Similar Articles
 Found 20 similar articles (search time: 112 ms)
1.
Quinn Thomson, Engineering Optimization, 2013, 45(6): 615–633
This article presents an adaptive accuracy trust region (AATR) optimization strategy where cross-validation is used by the trust region to reduce the number of sample points needed to construct metamodels for each step of the optimization process. Lower accuracy metamodels are initially used for the larger trust regions, and higher accuracy metamodels are used for the smaller trust regions towards the end of optimization. Various metamodelling strategies are used in the AATR algorithm: optimal and inherited Latin hypercube sampling to generate experimental designs; quasi-Newton, kriging and polynomial regression metamodels to approximate the objective function; and the leave-k-out method for validation. The algorithm is tested with two-dimensional single-discipline problems. Results show that the AATR algorithm is a promising method when compared to a traditional trust region method. Polynomial regression in conjunction with a new hybrid inherited-optimal Latin hypercube sampling performed the best.
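The experimental designs mentioned above rest on Latin hypercube sampling: each of the n strata along every dimension receives exactly one sample point. A minimal sketch of the plain variant (not the optimal or inherited variants the article uses):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Basic Latin hypercube sample in the unit cube: each of the
    n_samples equal-width strata along every dimension receives
    exactly one point."""
    rng = np.random.default_rng(seed)
    # one uniformly placed point inside each stratum, per dimension
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    # decouple the dimensions by independently permuting each column
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    return u

X = latin_hypercube(10, 2, seed=0)
# every stratum [k/10, (k+1)/10) along each axis contains exactly one point
```

The one-point-per-stratum property is what gives the design its space-filling behaviour with far fewer points than a full grid.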

2.
H. Linhart, Technometrics, 2013, 55(3): 287–292
A shot noise X(t) is a superposition of impulses occurring at times t_j, where the t_j follow a Poisson point process. For the case of a rectangular impulse function (height a, length b) the cumulant generating function of the sample mean M(T) is obtained, as are the mean and variance of the sample variance V(T). These, together with the covariance of V and M, are used to calculate the standard error of the coefficient of dispersion V/M. An approximate distribution is conjectured for I(T) = V(T)/aM(T) and, based on it, percentage points of this statistic are given. They could be used to test the hypothesis that a given noise is shot noise. An application to a textile problem is described.
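For rectangular impulses, the value of X(t) is a times the number of impulses active at t, which is Poisson with mean rate×b; hence E[X] = a·rate·b and Var[X] = a²·rate·b, so the dispersion statistic I = V/(aM) should sit near 1 for true shot noise. A simulation sketch under the simplifying assumption of independent sampling instants (real shot-noise samples taken close together in time are correlated, so only the marginal moments are checked here):

```python
import numpy as np

# Shot noise with rectangular impulses (height a, length b): at any instant,
# X = a * (number of active impulses), where the count is Poisson(rate * b).
# Hence E[X] = a*rate*b and Var[X] = a**2 * rate * b, so I = V/(a*M) ~ 1.
rng = np.random.default_rng(42)
a, b, rate, n = 2.0, 0.5, 10.0, 200_000
counts = rng.poisson(rate * b, size=n)   # active impulses at sampled instants
x = a * counts
M, V = x.mean(), x.var(ddof=1)
I = V / (a * M)                          # dispersion statistic, ~1 for shot noise
```

Values of I far from 1 would reject the shot-noise hypothesis, which is the test the abstract describes.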

3.
In this paper, an efficient numerical method to solve sliding contact problems is proposed. Explicit formulae for the Gauss–Jacobi numerical integration scheme appropriate for the singular integral equations of the second kind with Cauchy kernels are derived. The resulting quadrature formulae for the integrals are valid at nodal points determined from the zeroes of a Jacobi polynomial. Gaussian quadratures obtained in this manner involve fixed nodal points and are exact for polynomials of degree 2n − 1, where n is the number of nodes. From this Gauss–Jacobi quadrature, the existing Gauss–Chebyshev quadrature formulas can be easily derived. Another apparent advantage of this method is its ability to capture correctly the singular or regular behaviour of the tractions at the edge of the region of contact. The analysis also shows that once the total normal load and the friction coefficient are given, the external moment M and contact eccentricity e (for incomplete contact) in fully sliding contact are uniquely determined. Finally, numerical solutions are computed for two typical contact cases: sliding Hertzian contact, and a sliding contact between a flat punch with rounded corners pressed against the flat surface of a semi-infinite elastic solid. These results provide a demonstration of the validity of the proposed method. Copyright © 2005 John Wiley & Sons, Ltd.

4.
In this paper, the differential quadrature method is used to solve first-order initial value problems. The initial condition is given at the beginning of a time interval. The time derivative at a sampling grid point within the time interval can be expressed as a weighted linear sum of the given initial condition and the function values at the sampling grid points within the time interval. The order of accuracy and the stability property of the quadrature solutions depend on the locations of the sampling grid points. It is shown that the order of accuracy of the quadrature solutions at the end of a time interval can be improved to 2n − 1 or 2n if the n sampling grid points are chosen carefully. In fact, the approximate solutions are equivalent to the generalized Padé approximations. The resultant algorithms are therefore unconditionally stable with controllable numerical dissipation. The corresponding sampling grid points are found to be given by the roots of the modified shifted Legendre polynomials. From the numerical examples, the accuracy of the quadrature solutions obtained by using the proposed sampling grid points is found to be better than those obtained by the commonly used uniformly spaced or Chebyshev–Gauss–Lobatto sampling grid points. Copyright © 2001 John Wiley & Sons, Ltd.
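The 2n − 1 accuracy claim mirrors the classical Gaussian quadrature property: n nodes placed at Legendre roots integrate polynomials of degree up to 2n − 1 exactly. A sketch using NumPy's standard Gauss–Legendre nodes (note these are not the modified shifted Legendre roots of the paper, but the same order-of-accuracy mechanism is at work):

```python
import numpy as np

# n-point Gauss-Legendre quadrature is exact for polynomials of
# degree up to 2n-1 on [-1, 1].
n = 4
nodes, weights = np.polynomial.legendre.leggauss(n)

def quad(f):
    """Approximate the integral of f over [-1, 1]."""
    return float(np.dot(weights, f(nodes)))

# degree 2n-1 = 7 test polynomial: the exact integral of x^7 + x^2
# over [-1, 1] is 0 + 2/3 = 2/3, and 4 nodes recover it exactly
approx = quad(lambda x: x**7 + x**2)
```

With only 4 function evaluations the degree-7 polynomial is integrated to machine precision, which is why careful placement of sampling grid points pays off so strongly in time integration.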

5.
To reduce noise interference in measured environments, a model-updating method based on the singular values of frequency response functions (FRFs) is proposed. An attractor matrix is reconstructed from the computed FRFs and subjected to singular value decomposition; when noise is present, the number of singular values retaining the dominant feature information is selected by the criterion of an abrupt change in the number of extreme points, and the parameters to be updated are thereby determined. Latin hypercube sampling is used to draw the initial sample points, and, combined with the singular-value responses corresponding to the updating parameters, particle swarm optimization finds the optimal correlation coefficients to construct a Kriging model. An objective function is built to minimize the squared difference of the singular-value responses, and the cuckoo search algorithm solves for the updated parameter values. Simulation examples show that constructing the Kriging model with singular values as the structural response achieves high updating accuracy; when Gaussian white noise at various signal-to-noise ratios is added to the FRFs, satisfactory updating results are still obtained, demonstrating that the method is strongly robust to noise.

6.
Computing the Information Content of Decoupled Designs
The information content of uncoupled designs can be computed by summing the information content associated with each functional requirement. This paper proves that information cannot be summed for decoupled designs. To overcome this problem, this paper presents two algorithms for computing information content of decoupled designs. One algorithm is applicable to any joint probability density function for the design parameters; the second algorithm applies only to uniformly distributed design parameters. The algorithm for uniform distributions is based on a recursive procedure for computing the volume of a convex polytope in n-dimensional real space, where n is the number of design parameters. An engineering application of the algorithms is presented. The example demonstrates that summing information content can significantly over-estimate total information when compared to an algorithm that accounts for correlation. The example also demonstrates that decoupled designs can have lower information content than uncoupled systems with the same functional requirements and similar components.

7.
In many applications, several conflicting objectives have to be optimized concurrently, leading to a multi-objective optimization problem. Since the set of solutions, the so-called Pareto set, typically forms a (k − 1)-dimensional manifold, where k is the number of objectives considered in the model, continuation methods such as predictor–corrector (PC) methods are in certain cases very efficient tools for rapidly computing a finite size representation of the set of interest. However, their classical implementation leads to trouble when considering higher-dimensional models (i.e. for dimension n > 1000 of the parameter space). In this work, it is proposed to perform a successive approximation of the tangent space which allows one to find promising predictor points with less effort, in particular for high-dimensional models, since no Hessians of the objectives have to be calculated. The applicability of the resulting PC variant is demonstrated on a benchmark model for up to n = 100,000 parameters.

8.
X.M. Kong, Y.R. Fan, Y.P. Li, Engineering Optimization, 2016, 48(4): 562–581
In this study, a duality theorem-based algorithm (DTA) for inexact quadratic programming (IQP) is developed for municipal solid waste (MSW) management under uncertainty. It improves upon the existing numerical solution method for IQP problems. The comparison between DTA and the derivative algorithm (DAM) shows that the DTA method provides better solutions than DAM with lower computational complexity. It is not necessary to identify the uncertain relationship between the objective function and decision variables, which is required for the solution process of DAM. The developed method is applied to a case study of MSW management and planning. The results indicate that reasonable solutions have been generated for supporting long-term MSW management and planning. They could provide more information as well as enable managers to make better decisions to identify desired MSW management policies in association with minimized cost under uncertainty.

9.
This paper develops an efficient method to analyse the behaviour of an unreliable n-stage transfer line with (n − 1) finite inter-stage storage buffers. The n-stage line is decomposed into (n − 1) aggregate two-stage lines, for which analytical solutions are available. The method is developed through the examination of the steady state behaviour of the n-stage line and the decomposed lines, and the relationship between the failure and repair rates of the individual stages and the aggregate stages. Numerical and simulation experiments show that the method is efficient in computation and performs quite well.

10.
In this paper, a series of advanced searching algorithms have been examined and implemented for accelerating multi-axial fatigue cycle counting efforts when dealing with large time histories. In a computerized calculation of the path-length dependent cycle counting method, most of the central processor unit's (CPU) time is spent on searching for the maximum range or distance in a stress or strain space. A brute-force search is the simplest to implement, and will always find a solution if it exists. However, its cost, in many practical problems, tends to grow rapidly as the size of the loading spectrum increases, with a search time on the order of O(n²), where n is the number of spectrum data points. In contrast, a form of Andrew's monotone chain algorithm, as demonstrated in this paper, can remarkably reduce the solution time to the order of O(n log n). The effectiveness of the new path-length searching procedure is demonstrated by a series of worked examples with a varying degree of non-proportionality in multi-axial loading history.
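The speed-up works because the maximum pairwise distance in a planar point set is always attained between convex hull vertices: Andrew's monotone chain builds the hull in O(n log n), after which only the (usually few) hull vertices need to be compared. A minimal sketch with illustrative function names:

```python
import numpy as np
from itertools import combinations

def cross(o, a, b):
    """z-component of (a - o) x (b - o); sign gives the turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def monotone_chain(points):
    """Andrew's monotone chain: convex hull of 2-D points in O(n log n)."""
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # drop duplicated endpoints

def max_range(points):
    """Largest pairwise distance; only hull vertices need checking."""
    hull = monotone_chain(points)
    return max(np.hypot(a[0] - b[0], a[1] - b[1])
               for a, b in combinations(hull, 2))
```

The pairwise scan over hull vertices is O(h²) with h much smaller than n; rotating calipers would reduce even that to O(h) if needed.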

11.
A two-robot flow-shop scheduling problem with n identical jobs and m machines is defined and evaluated for four robot collaboration levels corresponding to different levels of information sharing, learning and assessment: Full – robots work together, performing self and joint learning and sharing full information; Pull – one robot decides when and if to learn from the other robot; Push – one robot may force the second to learn from it; and None – each robot learns independently with no information sharing. Robots operate on parallel tracks, transporting jobs between successive machines and returning empty to a machine to move another job. The objective is to obtain a robot schedule that minimises makespan (Cmax) for machines with varying processing times. A new reinforcement learning algorithm is developed, using dual Q-learning functions. A novel feature of the collaborative algorithm is the assignment of different reward functions to the robots: minimising robot idle time and minimising job waiting time, since such delays increase makespan. Simulation analyses with fast, medium and slow speed robots indicated that Full collaboration with a fast–fast robot pair was best according to minimum average upper bound error. The new collaborative algorithm provides a tool for finding optimal and near-optimal solutions to difficult collaborative multi-robot scheduling problems.
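At the core of the dual Q-learning approach is the standard tabular update Q(s,a) ← Q(s,a) + α(r + γ·max Q(s′,·) − Q(s,a)). A generic single-agent sketch on a hypothetical 5-state toy chain (not the paper's collaborative dual-Q scheme, whose states, actions and reward functions come from the robot scheduling problem):

```python
import numpy as np

# Tabular Q-learning on a toy 5-state chain: stepping right out of
# state 3 pays reward 1.  Behaviour is uniformly random; Q-learning is
# off-policy, so the greedy policy is still learned.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

def step(s, a):
    """Deterministic toy environment dynamics."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if (s == n_states - 2 and a == 1) else 0.0
    return s2, r

for _ in range(500):                 # episodes
    s = 0
    for _ in range(20):              # steps per episode
        a = int(rng.integers(n_actions))
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
```

In the paper's setting, each robot maintains its own Q function with its own reward signal (idle time for one, job waiting time for the other), and the collaboration levels govern how the two tables are shared.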

12.
The optimization of a two-parameter river catchment simulation model is described. The parameter K1 controls the rate of infiltration into the soil, and the second parameter, K2, is used in the routing equation. The Simplex direct search method is used and is implemented on a hybrid computer. The computer programme forms an n-dimensional “simplex” (n being the number of parameters) from initial trial values of the parameters, and uses the basic operations of reflection, expansion and contraction to find the optimum value of the objective function. The operations are carried out according to the value of the objective function at each apex of the “simplex.” The peripheral devices linked to the hybrid computer are used to give a continuous display of the optimization process, and the effects of systematically varying the parameters are studied by plotting sample hydrographs for various pairs of parameter values and by plotting the surface of the objective function.
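The reflection, expansion and contraction operations described are those of the Nelder–Mead simplex method. A sketch using SciPy's implementation on a stand-in two-parameter quadratic surface (the real objective would be supplied by the catchment simulation; the function and its minimum here are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def objective(k):
    """Hypothetical stand-in for the catchment model's objective;
    in practice this would run the simulation for (K1, K2)."""
    k1, k2 = k
    return (k1 - 3.0) ** 2 + (k2 + 1.0) ** 2

# Nelder-Mead builds a simplex of n+1 = 3 vertices for 2 parameters and
# applies reflection/expansion/contraction, derivative-free
res = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
```

Because the method needs only objective values at the simplex vertices, it suits simulation models where derivatives are unavailable, exactly the situation described above.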

13.
For multiple-objective optimization problems, a common solution methodology is to determine a Pareto optimal set. Unfortunately, these sets are often large and can become difficult to comprehend and consider. Two methods are presented as practical approaches to reduce the size of the Pareto optimal set for multiple-objective system reliability design problems. The first method is a pseudo-ranking scheme that helps the decision maker select solutions that reflect his/her objective function priorities. The second approach uses data-mining clustering techniques, applying the k-means algorithm to find clusters of similar solutions; this provides the decision maker with just k general solutions to choose from. From the clustered Pareto optimal set, solutions are then sought that are likely to be more relevant to the decision maker: solutions where a small improvement in one objective would lead to a large deterioration in at least one other objective. To demonstrate how these methods work, the well-known redundancy allocation problem is solved as a multiple-objective problem using the NSGA genetic algorithm to find the Pareto optimal solutions initially, and then the two proposed methods are applied to prune the Pareto set.
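The clustering step can be sketched with plain Lloyd's k-means on a toy two-objective front (hypothetical data; the paper's pipeline first runs NSGA to obtain the actual Pareto set):

```python
import numpy as np

def kmeans(points, k, iters=100, seed=None):
    """Plain Lloyd's k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        new = np.array([points[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# hypothetical Pareto front in (cost, reliability) space
rng = np.random.default_rng(3)
cost = np.sort(rng.random((30, 1)), axis=0)
front = np.hstack([cost, 1.0 - cost**2])   # cost rises, reliability falls
centroids, labels = kmeans(front, k=3, seed=3)
```

The k centroids then serve as the "general solutions" offered to the decision maker in place of the full front.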

14.
Cryogenic molecular beam experiments show that the ferroelectric dipole moments of small niobium clusters with an even number of valence electrons n are typically greater than those with odd n. This is verified in alloy clusters Nb_N X_M, where X = Al, Au, O, Mn, Fe, and Co; N ≤ 100; M ≤ 3. As in superconducting alloys, Mn doping quenches the effect while Al and Au enhance it, suggesting a relation between cluster ferroelectricity and bulk superconductivity. A correlated ground state is proposed in which the even–odd effect is caused by the depolarizing effect of a single unpaired electron.

15.
When sampling is carried out independently for the K strata of a finite stratified dichotomous population (defectives vs. standard items), and the number Z_i of defectives per stratum sample is observed, the corresponding probability function for X = (X_1, …, X_K) is the product of hypergeometric functions which depend on the sample sizes n_i, the stratum sizes N_i, and the number of defectives M_i in the stratum (i = 1, …, K). It is assumed that prior information is available about the M_i's which can be expressed, by suitable choice of the parameters a_i and b_i, as the product of independent hyperbinomial functions.

In each stratum the cost per observation is a known constant. Using squared error loss function, the prior Bayes risk is found for the linear function of interest,

and the optimum allocation of sample sizes is found, the one for which the prior Bayes risk is minimum when the total sampling budget is fixed.

16.
We propose an algorithm for the global optimization of expensive and noisy black box functions using a surrogate model based on radial basis functions (RBFs). A method for RBF-based approximation is introduced in order to handle noise. New points are selected to minimize the total model uncertainty weighted against the surrogate function value. The algorithm is extended to multiple objective functions by instead weighting against the distance to the surrogate Pareto front; it therefore constitutes the first algorithm for expensive, noisy and multiobjective problems in the literature. Numerical results on analytical test functions show promise in comparison to other (commercial) algorithms, as well as results from a simulation based optimization problem.
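The surrogate at the heart of such methods is an RBF interpolant s(x) = Σ_i w_i φ(‖x − x_i‖). A minimal pure-NumPy sketch with a Gaussian kernel (plain interpolation only; the proposed algorithm adds noise handling and uncertainty-weighted point selection, which are not shown here):

```python
import numpy as np

def rbf_fit(X, y, eps=3.0):
    """Fit weights of a Gaussian RBF interpolant
    s(x) = sum_i w_i * exp(-(eps * |x - x_i|)**2)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = np.exp(-(eps * r) ** 2)        # symmetric kernel matrix
    return np.linalg.solve(Phi, y)

def rbf_eval(X, w, Xq, eps=3.0):
    """Evaluate the fitted surrogate at query points Xq."""
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=2)
    return np.exp(-(eps * r) ** 2) @ w

# stand-in for an expensive black-box function, sampled at 20 points
rng = np.random.default_rng(0)
X = rng.random((20, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
w = rbf_fit(X, y)
```

Once fitted, the cheap surrogate is what gets searched for promising new sample points, so the expensive function is called only where the model suggests it is worthwhile.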

17.
In this paper, we deal with the single-row equidistant facility layout problem (SREFLP), which asks for a one-to-one assignment of n facilities to n locations equally spaced along a straight line so as to minimize the sum of the products of the flows and distances between facilities. We develop a branch-and-bound algorithm for solving this problem. The lower bound is computed by first transforming the flow matrix and then applying the well-known Gilmore–Lawler bounding technique. The algorithm also incorporates a dominance test which drastically reduces redundancy in the search process. The test is based on a tabu search procedure designed to solve the SREFLP. We provide computational results for problem instances of size up to 35 facilities. For a number of instances, the optimal value of the objective function turned out to be smaller than the best value reported in the literature.

18.
In this paper, the general problem of chemical process optimization defined by a computer simulation is formulated. It is generally a nonlinear, non-convex, non-differentiable optimization problem over a disconnected set. A brief overview of popular optimization methods from the chemical engineering literature is presented. The recent mesh adaptive direct search (MADS) algorithm is detailed. It is a direct search algorithm, so it uses only function values and does not compute or approximate derivatives. This is useful when the functions are noisy, costly or undefined at some points, or when derivatives are unavailable or unusable. In this work, the MADS algorithm is used to optimize a spent potliner (toxic waste from aluminum production) treatment process. In comparison with the best previously known objective function value, a 37% reduction is obtained, even though the model failed to return a value 43% of the time.

19.
A prominent problem in airline crew scheduling is the pairings or Tour-of-Duty planning problem. The objective is to determine a set of pairings (or Tours-of-Duty) for a crew group to minimise the planned cost of operating a schedule of flights. However, due to unforeseen events the performance in operation can differ considerably from planning, sometimes causing significant additional recovery costs. In recent years there has been a growing interest in robust crew scheduling. Here, the aim is to find solutions that are “cheap” in terms of planned cost as well as being robust, meaning that they are less likely to be disrupted in case of delays. Taking the stochastic nature of delays into account, Yen and Birge (Transp Sci 40:3–14, 2006) formulate the problem as a two-stage stochastic integer programme and develop an algorithm to solve this problem. Based on the contradictory nature of the goals, Ehrgott and Ryan (J Multi-Criteria Decis Anal 11:139–150, 2002) formulate a bi-objective set partitioning model and employ elastic constraint scalarisation to enable the solution by set partitioning algorithms commercially used in crew scheduling software. In this study, we compare the two solution approaches. We improve the algorithm of Yen and Birge (Transp Sci 40:3–14, 2006) and implement both methods with a commercial crew scheduling software. The results of both methods are compared with respect to characteristics of robust solutions, such as the number of aircraft changes for crew. We also conduct experiments to simulate the performance of the obtained solutions. All experiments are performed using actual schedule data from Air New Zealand.

20.
A ‘multiple determinant parabolic interpolation method’ is described and evaluated, principally by using a plane frame test-bed program. It is intended primarily for solving the transcendental eigenvalue problems arising when the ‘exact’ member equations obtained by solving the governing differential equations of members are used to find the eigenvalues (i.e. critical buckling loads or undamped natural frequencies) of structures. The method has five stages which together ensure successful convergence on the required eigenvalues in all circumstances. Thus, whenever checks indicate its suitability, parabolic interpolation is used to obtain eigenvalues more rapidly than would the popular bisection alternative. The checks also ensure a wise choice of the determinant used by the interpolation. The determinants available are all usually zero at eigenvalues, and comprise the determinant of the overall stiffness matrix K_n and the determinants which result, with negligible extra computation, from effectively considering all except the last m (m = 1, 2, …, n − 1) freedoms to which K_n corresponds as internal substructure freedoms. Tests showed time savings compared to bisection of 31 per cent when finding non-coincident eigenvalues to relative accuracy ε = 10^-4, increasing to 62 per cent when ε = 10^-8. The tests also showed time savings of about 10 per cent compared with an earlier Newtonian approach. The method requires no derivatives and its use in the widely available space frame program BUNVIS-RG has demonstrated how easily it can replace bisection, which was used in the earlier program BUNVIS.
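The bisection baseline that the parabolic interpolation method is measured against can be sketched simply: a sign change of the overall stiffness determinant brackets an eigenvalue, and the bracket is halved until tight. The 2×2 matrix below is a hypothetical stand-in for K_n, with known eigenvalues 1 and 3:

```python
import numpy as np

def det_k(lmbda):
    """Determinant of a toy 'stiffness' matrix K(lambda) = A - lambda*I;
    its zeros are the eigenvalues of A (here 1 and 3)."""
    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    return np.linalg.det(A - lmbda * np.eye(2))

def bisect_root(f, lo, hi, tol=1e-10):
    """Plain bisection on a sign change: the baseline against which
    parabolic interpolation saves 31-62 per cent of the time."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0:
            lo, flo = mid, f(mid)    # root lies in the right half
        else:
            hi = mid                  # root lies in the left half
    return 0.5 * (lo + hi)

eig = bisect_root(det_k, 0.0, 2.0)   # bracket [0, 2] contains eigenvalue 1
```

Bisection needs roughly log2((hi − lo)/tol) determinant evaluations regardless of how smooth the determinant is, which is exactly the cost that a well-chosen parabolic interpolation step can undercut.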


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号