Similar Documents
20 similar documents found (search time: 31 ms)
1.
Optimal design of multi-response experiments for estimating the parameters of multi-response linear models is a challenging problem. The main drawback of the existing algorithms is that, in the process of generating an optimal design, they require the solution of many optimization problems involving cumbersome manual operations. Furthermore, all the existing methods generate approximate designs, and no method for multi-response n-exact design has been cited in the literature. This paper presents a unified formulation for the multi-response optimal design problem using Semi-Definite Programming (SDP) that can generate D-, A- and E-optimal designs. The proposed method alleviates the difficulties associated with the existing methods. It solves a one-shot optimization model whose solution selects the optimal design points among all possible points in the design space. We generate both approximate and n-exact designs for multi-response models by solving SDP models with integer variables. Another advantage of the proposed method lies in the short computation time taken to generate an optimal design for multi-response models. Several test problems have been solved using an existing interior-point-based SDP solver. Numerical results show the potential and efficiency of the proposed formulation as compared with those of other existing methods. The robustness of the generated designs with respect to the variance-covariance matrix is also investigated.
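As a concrete illustration of the D-optimality criterion discussed above (not the paper's SDP formulation, which needs a conic solver), here is a minimal sketch using the classical multiplicative (Titterington) algorithm for approximate D-optimal weights in a single-response linear model; the quadratic model and candidate grid are illustrative assumptions:

```python
import numpy as np

def d_optimal_weights(X, iters=2000, tol=1e-9):
    """Approximate D-optimal design weights over the candidate rows of X
    using the multiplicative (Titterington) algorithm: each weight is
    rescaled by its normalized prediction variance x' M(w)^{-1} x / p."""
    n, p = X.shape
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        M = X.T @ (w[:, None] * X)                         # information matrix M(w)
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)
        w_new = w * d / p                                  # sums to 1 by construction
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

# Single-response quadratic model y = b0 + b1*x + b2*x^2 on a grid in [-1, 1];
# the classical D-optimal design puts weight 1/3 on each of {-1, 0, 1}.
t = np.linspace(-1.0, 1.0, 21)
X = np.column_stack([np.ones_like(t), t, t**2])
w = d_optimal_weights(X)
support = t[w > 1e-3]
```

The general equivalence theorem gives a convergence check: at the D-optimal design, the prediction variance d(x) never exceeds the number of parameters p.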

2.
Generalized application of modern numerical analysis methods, as digital computers developed rapidly, produced a first revolution in design techniques, allowing designers to perform computations considered unapproachable until that time. The introduction of Computer Aided Design (CAD) techniques, as high-performance graphic peripherals develop rapidly, is now producing a second revolution by making most routine design tasks easy and fast. However, the introduction of Computer Aided Optimum Design techniques has not yet produced the expected third revolution, in spite of the large amount of research and the interest of its potential applications. The authors think that this is due mainly to the dispersion of optimum design research and to the lack of a well-established doctrine. In this paper we approach the design process from a general methodological perspective, suitable for application to a wide range of problems. The design process is organized in several related levels. This approach leads naturally to the concept of optimum design and to the statement of a general mathematical programming problem. The practical application of this methodology to any particular problem takes an efficient and modular form. First- and second-order sensitivity analysis techniques are introduced from the general formulation, and alternative techniques (adjoint state) to the direct differentiation method are discussed. DAO2, a powerful and versatile computer aided optimum design system based on the Finite Element Method, has been developed by the authors according to this general methodology. The system can efficiently solve 2D and 3D structural fixed-geometry and shape optimization problems. The power and viability of this methodology are illustrated by the solution of a structural optimization problem: the shape of the central section of an arch dam is optimized.
A linear elastic structural FEM analysis is performed simultaneously for plane stress and for radial symmetry, with constraints imposed for several load cases, taking into account the construction and loading stages. It should be emphasized that the same optimum design is reached in a small number of iterations starting from two significantly different initial designs.

3.
Finding optimum conditions for process factors in an engineering optimization problem with response surface functions requires structured data collection using experimental design. When the experimental design space is constrained owing to external factors, it may form an asymmetrical and irregular shape, and standard experimental design methods then become ineffective. Computer-generated optimal designs, such as D-optimal designs, provide alternatives. While several iterative exchange algorithms for D-optimal designs are available for a linearly constrained irregular design space, it has not been clearly understood how D-optimal design points should be generated when the design space is nonlinearly constrained, nor how the response surface models should be optimized. This article proposes an algorithm for generating D-optimal design points that satisfy both feasibility and optimality conditions by using piecewise linear functions on the design space. The D-optimality-based response surface design models are proposed and the optimization procedures are then analysed.
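The flavor of such algorithms can be sketched with a greedy, point-by-point variant of exchange methods on a nonlinearly constrained candidate set (this is not the article's piecewise-linear algorithm; the disc constraint and quadratic model are illustrative assumptions). Adding the candidate with maximal prediction variance maximizes the determinant gain, since det(M + ff') = det(M)(1 + f'M⁻¹f):

```python
import numpy as np

def model_row(x):
    # Full quadratic response surface model in two factors.
    x1, x2 = x
    return np.array([1.0, x1, x2, x1*x2, x1**2, x2**2])

def greedy_d_optimal(candidates, n_runs, ridge=1e-6):
    """Greedily add the candidate maximizing the determinant of the
    information matrix (a simple sequential variant of exchange methods)."""
    p = model_row(candidates[0]).size
    F = np.array([model_row(c) for c in candidates])
    M = ridge * np.eye(p)                         # ridge keeps M invertible
    design = []
    for _ in range(n_runs):
        Minv = np.linalg.inv(M)
        gains = np.einsum('ij,jk,ik->i', F, Minv, F)   # variance d(x)
        i = int(np.argmax(gains))                      # biggest det gain
        design.append(candidates[i])
        M = M + np.outer(F[i], F[i])
    return np.array(design), M

# Candidate grid on [-1, 1]^2, kept only where a *nonlinear* constraint
# (here, the hypothetical disc x1^2 + x2^2 <= 1) is satisfied.
g = np.linspace(-1, 1, 11)
grid = np.array([(a, b) for a in g for b in g])
feasible = grid[(grid**2).sum(axis=1) <= 1.0 + 1e-12]
design, M = greedy_d_optimal(feasible, n_runs=10)
```

Every selected run is feasible by construction, because infeasible candidates are filtered out before the exchange loop ever sees them.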

4.
A fifth-order family of iterative methods for solving systems of nonlinear equations and highly nonlinear boundary value problems is developed in this paper. Convergence analysis demonstrates that the local order of convergence of the numerical method is five. Computer algebra systems (Maple, Mathematica, MATLAB) were the primary tools for dealing with difficult problems, since they allow the handling and manipulation of complex mathematical equations and other mathematical objects. Several numerical examples are provided to demonstrate the properties of the proposed rapidly convergent algorithms. A dynamical evaluation of the presented methods is also given, utilizing basins of attraction to analyze their convergence behavior. Aside from visualizing iterative processes, this methodology provides useful information on iterations, such as the number of diverging and converging points and the average number of iterations as a function of the initial point. Solving numerous highly nonlinear boundary value problems and large nonlinear systems of equations of higher dimensions demonstrates the performance, efficiency, precision, and applicability of the newly presented technique.
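The standard diagnostic behind such order claims can be sketched numerically (this is plain Newton iteration with an empirical order estimate, not the authors' fifth-order family; the test system is an illustrative assumption). The estimate q ≈ log(e₍ₖ₊₁₎/eₖ)/log(eₖ/e₍ₖ₋₁₎) should approach 2 for Newton, and 5 for a fifth-order method:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-14, max_iter=50):
    """Plain Newton iteration for F(x) = 0, returning all iterates so the
    empirical order of convergence can be estimated afterwards."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(max_iter):
        x = xs[-1]
        step = np.linalg.solve(J(x), F(x))
        xs.append(x - step)
        if np.linalg.norm(step) < tol:
            break
    return xs

# Test system with known root (sqrt(2), sqrt(2)).
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x - y])

def J(v):
    x, y = v
    return np.array([[2.0*x, 2.0*y], [1.0, -1.0]])

xs = newton_system(F, J, x0=[2.0, 1.0])
root = np.full(2, np.sqrt(2.0))
errs = [np.linalg.norm(x - root) for x in xs]
# Empirical order q ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}); ~2 for Newton.
e = [v for v in errs if v > 1e-13][:4]
q = np.log(e[3] / e[2]) / np.log(e[2] / e[1])
```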

5.
I. U. Cagdas, Engineering Optimization, 2013, 45(4): 453-469
Optimum designs are given for clamped-clamped columns under concentrated and distributed axial loads. The design objective is the maximization of the buckling load subject to volume and maximum stress constraints. Results for a minimum area constraint are also obtained for comparison. In the case of a stress constraint, the minimum thickness of an optimal column is not known a priori, since it depends on the maximum buckling load, which in turn depends on the minimum thickness, necessitating an iterative solution. An iterative solution method is developed based on finite elements, and results are obtained for cross-sections with I ∝ A^n for n = 1, 2, 3, where I is the moment of inertia and A the cross-sectional area. The iterations start using the unimodal optimality condition and continue with the bimodal optimality condition if the second buckling load becomes less than or equal to the first one. Numerical results show that the optimal columns become larger in the direction of the distributed load due to the increase in the stress in this direction. Even though the optimal columns are symmetrical with respect to their mid-points when the compressive load is concentrated at the end-points, in the case of columns subject to distributed axial loads the optimal shapes are unsymmetrical.
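The finite element buckling analysis at the core of such a procedure can be sketched for the simplest case, a uniform (non-optimized) clamped-clamped column under an end load, where the exact critical load 4π²EI/L² is available as a check; the mesh size and unit properties are illustrative assumptions:

```python
import numpy as np

# Euler buckling of a uniform clamped-clamped column by FE: solve the
# generalized eigenproblem K v = P G v with cubic Hermite beam elements.
# The exact critical load is 4*pi^2*EI/L^2.

def buckling_load(n_el=20, EI=1.0, L=1.0):
    h = L / n_el
    ndof = 2 * (n_el + 1)                      # (w, w') per node
    K = np.zeros((ndof, ndof))
    G = np.zeros((ndof, ndof))
    Ke = (EI / h**3) * np.array([              # bending stiffness
        [ 12,    6*h,  -12,    6*h ],
        [ 6*h, 4*h*h, -6*h, 2*h*h ],
        [-12,   -6*h,   12,   -6*h ],
        [ 6*h, 2*h*h, -6*h, 4*h*h ]])
    Ge = (1.0 / (30*h)) * np.array([           # geometric stiffness, unit P
        [ 36,    3*h,  -36,    3*h ],
        [ 3*h, 4*h*h, -3*h,  -h*h ],
        [-36,   -3*h,   36,   -3*h ],
        [ 3*h,  -h*h, -3*h, 4*h*h ]])
    for e in range(n_el):
        idx = np.arange(2*e, 2*e + 4)
        K[np.ix_(idx, idx)] += Ke
        G[np.ix_(idx, idx)] += Ge
    free = np.arange(2, ndof - 2)              # clamp both ends: w = w' = 0
    Kr, Gr = K[np.ix_(free, free)], G[np.ix_(free, free)]
    lams = np.linalg.eigvals(np.linalg.solve(Gr, Kr))
    return np.min(lams.real)

P_cr = buckling_load()
```

In a stress-constrained optimization this eigensolve would sit inside the iteration loop, with the minimum thickness updated from the current buckling load at each pass.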

6.
The Finite Element (FE) method is among the most powerful tools for crash analysis and simulation. Crashworthiness design of structural members requires repetitive and iterative application of FE simulation. This paper presents a crashworthiness design optimization methodology based on efficient and effective integration of optimization methods, FE simulations, and approximation methods. Optimization methods, although effective in general for solving structural design problems, lose their power in crashworthiness design. Objective and constraint functions in crashworthiness optimization problems are often non-smooth and highly non-linear in terms of the design variables, and follow from a computationally costly FE simulation. In this paper, a sequential approximate optimization method is used to deal with both the high computational cost and the non-smooth character. The crashworthiness optimization problem is divided into a series of simpler sub-problems, which are generated using approximations of the objective and constraint functions. The approximations are constructed using a statistical model-building technique, Response Surface Methodology (RSM), and a genetic algorithm. The approximate optimization method is applied to solve crashworthiness design problems, including a cylinder, a simplified vehicle, and a New Jersey concrete barrier. The results demonstrate that the method is efficient and effective in solving crashworthiness design optimization problems. Received: 30 January 2002 / Accepted: 12 July 2002. Sponsorship of this research by the Federal Highway Administration of the US Department of Transportation is gratefully acknowledged. Dr. Nielen Stander at Livermore Software Technology Corporation is also gratefully acknowledged for providing subroutines to create D-optimal experimental designs and the simplified vehicle model.
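One step of such a sequential approximation can be sketched as follows (a minimal RSM sketch: the cheap analytic objective below is an illustrative stand-in for an expensive crash FE simulation, and the sampling plan is an assumption): sample the objective, fit a quadratic response surface by least squares, and minimize the surrogate analytically.

```python
import numpy as np

def expensive_objective(x):            # stand-in for an FE crash simulation
    return (x[0] - 0.3)**2 + 2.0*(x[1] + 0.4)**2 + 0.1*np.sin(5*x[0])

def quad_features(x):
    x1, x2 = x
    return np.array([1.0, x1, x2, x1*x2, x1**2, x2**2])

rng = np.random.default_rng(0)
samples = rng.uniform(-1, 1, size=(30, 2))          # design of experiments
y = np.array([expensive_objective(s) for s in samples])
A = np.array([quad_features(s) for s in samples])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)        # fitted RSM coefficients

# Stationary point of the fitted surface b0 + g'x + 0.5 x'H x:
g = beta[1:3]
H = np.array([[2*beta[4], beta[3]], [beta[3], 2*beta[5]]])
x_star = np.linalg.solve(H, -g)
```

In the full method this surrogate minimum would seed the next sub-problem: new simulations near x_star, a refreshed fit, and so on until the sequence converges.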

7.
The storage requirements and performance consequences of a few different data-parallel implementations of the finite element method for domains discretized by three-dimensional brick elements are reviewed. Letting a processor represent a nodal point per unassembled finite element yields a concurrency that may be one to two orders of magnitude higher for common elements than if a processor represents an unassembled finite element. The former representation also allows for higher-order elements with a limited amount of storage per processor. A totally parallel stiffness matrix generation algorithm is presented. The equilibrium equations are solved by a conjugate gradient method with diagonal scaling. The results from several simulations designed to show the dependence of the number of iterations to convergence upon the Poisson ratio, the finite element discretization, and the element order are reported. The domain was discretized by three-dimensional Lagrange elements in all cases. The number of iterations to convergence increases with the Poisson ratio. Increasing the number of elements in one spatial dimension increases the number of iterations to convergence linearly. Increasing the element order p in one spatial dimension increases the number of iterations to convergence as p^α, where α is 1.4–1.5 for the model problems.
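The solver named above, conjugate gradients with diagonal (Jacobi) scaling, can be sketched serially; the 1D Laplacian used as the test matrix is an illustrative stand-in for an assembled FE stiffness matrix:

```python
import numpy as np

def pcg_diag(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradients with diagonal (Jacobi) scaling; returns the
    solution and the iteration count at convergence."""
    Minv = 1.0 / np.diag(A)            # the diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 1D Laplacian (SPD), a toy stand-in for an FE stiffness matrix.
n = 100
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg_diag(A, b)
```

Both kernels in the loop, the matrix-vector product and the element-wise scaling, are exactly the operations that parallelize naturally in the data-parallel layouts the abstract compares.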

8.
The high computational cost of evaluating objective functions in electromagnetic optimum design problems necessitates the use of cost-effective techniques. The paper discusses the use of one popular technique, surrogate modelling, with emphasis placed on the importance of considering both the accuracy of, and the uncertainty in, the surrogate model. After a brief review of how such considerations have been made in the single-objective optimisation of electromagnetic devices, their use with kriging surrogate models is investigated. Traditionally, space-filling experimental designs are used to construct the initial kriging model, with the aim of maximising the accuracy of the initial surrogate model from which the optimisation search will start. Utility functions, which balance the predictions made by this model against its uncertainty, are often used to select the next point to be evaluated. In this paper, the performances of several different utility functions are examined, with experimental designs that yield initial kriging models of varying degrees of accuracy. It is found that no advantage is necessarily achieved by searching for optima using utility functions on initial kriging models of higher accuracy, and that a reduction in the total number of objective function evaluations can be achieved if the iterative optimisation search is started earlier, with utility functions applied to kriging models of lower accuracy. The implications for electromagnetic optimum design are discussed.
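A minimal sketch of the ingredients discussed above: a 1-D kriging (Gaussian process) surrogate and the expected-improvement (EI) utility, one common choice of utility function (whether the paper uses EI specifically is not stated; the fixed kernel hyperparameters and the cheap objective are illustrative assumptions):

```python
import numpy as np
from math import erf, sqrt, pi

def kernel(a, b, ell=0.3, s2=1.0):
    d = a[:, None] - b[None, :]
    return s2 * np.exp(-0.5 * (d / ell)**2)

def gp_posterior(Xs, ys, Xq, noise=1e-10):
    """Posterior mean and standard deviation of a GP with RBF kernel."""
    K = kernel(Xs, Xs) + noise * np.eye(len(Xs))
    Kq = kernel(Xs, Xq)
    sol = np.linalg.solve(K, Kq)
    mu = sol.T @ ys
    var = np.clip(kernel(Xq, Xq).diagonal() - np.sum(Kq * sol, axis=0), 0, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, y_best):
    # EI for minimization: E[max(y_best - Y, 0)] under Y ~ N(mu, sd^2).
    sd = np.maximum(sd, 1e-12)
    z = (y_best - mu) / sd
    Phi = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    phi = np.exp(-0.5 * z**2) / sqrt(2*pi)
    return (y_best - mu) * Phi + sd * phi

f = lambda x: np.sin(3*x) + 0.5*x          # cheap stand-in objective
Xs = np.array([-1.5, -0.5, 0.5, 1.5])      # initial experimental design
ys = f(Xs)
Xq = np.linspace(-2, 2, 201)
mu, sd = gp_posterior(Xs, ys, Xq)
ei = expected_improvement(mu, sd, ys.min())
x_next = Xq[np.argmax(ei)]                 # next point to evaluate
```

EI is exactly the prediction/uncertainty balance the abstract describes: it is near zero at already-sampled points (no uncertainty, no predicted improvement) and largest where the model is either promising or poorly resolved.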

9.
A formal method for subjective design evaluation with multiple attributes
This paper contributes toward a more formal theory and methodology for design by mathematically modeling the functional relationships between design decisions and the ultimate overall worth of a design. The conventional approach to design evaluation is limited in two respects. First, the direct measurement of attribute performance levels does not reflect the subsequent worth to the designer. Second, ad hoc methods for determining the relative importance or priority of attributes do not accurately quantify beneficial attribute tradeoffs. This information is critical to the iterative redesign process. A formal Methodology for the Evaluation of Design Alternatives (MEDA) is presented which resolves these problems and can be used to evaluate design alternatives in the iterative design/redesign process. Multiattribute utility analysis is employed to compare the overall utility or value of alternative designs as a function of the levels of several performance characteristics of a manufactured system. The evaluation function reflects the designer's preferences for sets of multiple attributes. Sensitivity analysis provides a quantitative basis for modifying a design to increase its utility to the decision-maker. Improvements in one or more areas of performance, and tradeoffs between attributes, which would most increase the desirability of a design are identified. A case study of materials selection and design in the automotive industry is presented which illustrates the steps followed in applying the method.
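The simplest instance of such an evaluation function is an additive utility over normalized single-attribute utilities (MEDA uses full multiattribute utility theory; the additive form, the three attributes, the weights, and every number below are illustrative assumptions in the spirit of the materials-selection case study):

```python
# Additive multi-attribute utility sketch: each attribute is mapped to
# [0, 1] by a single-attribute utility, then combined with weights.

def u_cost(c, worst=4000.0, best=1000.0):      # lower cost is better
    return max(0.0, min(1.0, (worst - c) / (worst - best)))

def u_mass(m, worst=50.0, best=20.0):          # lower mass is better
    return max(0.0, min(1.0, (worst - m) / (worst - best)))

def u_stiffness(k, worst=100.0, best=400.0):   # higher stiffness is better
    return max(0.0, min(1.0, (k - worst) / (best - worst)))

WEIGHTS = {"cost": 0.5, "mass": 0.2, "stiffness": 0.3}   # sum to 1

def overall_utility(design):
    return (WEIGHTS["cost"] * u_cost(design["cost"])
            + WEIGHTS["mass"] * u_mass(design["mass"])
            + WEIGHTS["stiffness"] * u_stiffness(design["stiffness"]))

designs = {
    "steel":     {"cost": 1500.0, "mass": 45.0, "stiffness": 380.0},
    "aluminium": {"cost": 2500.0, "mass": 28.0, "stiffness": 250.0},
    "composite": {"cost": 3800.0, "mass": 22.0, "stiffness": 320.0},
}
ranking = sorted(designs, key=lambda d: overall_utility(designs[d]), reverse=True)
```

Sensitivity analysis in this setting amounts to perturbing one attribute level (or weight) and observing the change in overall utility, which identifies the tradeoffs that raise a design's desirability most.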

10.
Input Variable Expansion (IVE) is a domain-independent, algorithmic methodology for generating new designs. These designs are based on a known design which is cast as an optimization problem, described by its first-principles equations. IVE performs design space expansion by replicating the topology of the initial design, assigning independent properties to each region, and distributing a selected input to the newly created regions. Optimization information is employed in the selection of the distributed input. The resulting design is optimized, using symbolic optimization techniques when possible. In more complex and industrially relevant problems where symbolic methods are more difficult, numerical methods are used to optimize the resulting designs. Trends over generations of designs are observed and the limiting designs are induced. These designs incorporate new features, and may exhibit either an improved objective or a feasible design space replacing an infeasible one. IVE is a complementary expansion technique to Dimensional Variable Expansion (DVE), developed by Cagan and Agogino (1991a). Together, IVE and DVE initiate a library of design space expansion techniques which, in some cases, eliminate the need for pre-postulated superstructures for finding the optimal solution. IVE is demonstrated in the designs of a catalyst bed, a set of columns under axial load, and a chemical reactor network.

11.
This paper deals with topology optimization of load-carrying structures defined on discretized continuum design domains. In particular, the minimum compliance problem with stress constraints is considered. The finite element method is used to discretize the design domain into n finite elements, and the design of a certain structure is represented by an n-dimensional binary design variable vector. In order to solve the problems, the binary constraints on the design variables are initially relaxed, and the problems are solved with both the method of moving asymptotes and a sparse non-linear optimizer for continuous optimization, in order to compare the two solvers. By solving a sequence of problems with a sequentially lower limit on the amount of grey allowed, designs that are close to 'black-and-white' are obtained. In order to get locally optimal solutions that are purely {0, 1}^n, a sequential linear integer programming method is applied as a post-processor. Numerical results are presented for some different test problems. Copyright © 2008 John Wiley & Sons, Ltd.

12.
Design domain identification with desirable attributes (e.g. feasibility, robustness and reliability) provides advantages when tackling large-scale engineering optimization problems. To deal with feasibility-robustness design problems, this article proposes a root cause analysis (RCA) strategy that identifies desirable design domains, by investigating the root causes of performance indicator variation, for the initial sampling of evolutionary algorithms. The iterative dichotomizer 3 (ID3) method, a decision tree technique, is applied to identify reduced feasible design domain sets. The robustness of candidate domains is then evaluated through a probabilistic principal component analysis-based criterion. The identified robust design domains enable optimal designs to be obtained that are relatively insensitive to input variations. An analytical example and an automotive structural optimization problem demonstrate the validity of the proposed RCA strategy.

13.
A new unified theory underlying the theoretical design of linear computational algorithms in the context of time-dependent first-order systems is presented. Unlike various formulations existing in the literature, the present unified theory: (i) leads to new avenues for designing computational algorithms, fostering the notion of algorithms by design while recovering existing algorithms in the literature; (ii) describes a theory for the evolution of time operators via a unified mathematical framework; and (iii) places into context and explains/contrasts existing designs and future developments, including the relationships among the different classes of algorithms in the literature such as linear multi-step methods, sub-stepping methods, Runge-Kutta-type methods, higher-order time-accurate methods, etc. It subsequently provides design criteria and guidelines for contrasting and evaluating time-dependent computational algorithms. The linear computational algorithms in the context of first-order systems are classified as pertaining to Type 1, Type 2, and Type 3 classifications of time-discretized operators. Such a classification provides new avenues for designing computational algorithms not existing in the literature and recovering existing algorithms of arbitrary order of time accuracy, including an overall assessment of their stability and other algorithmic attributes. Consequently, it enables the evaluation of computational algorithms for time-dependent problems via a standardized measure based on computational effort and memory usage, in terms of the resulting number of equation systems and the corresponding number of system solves.
A generalized stability and accuracy limitation barrier theorem underlies the generic designs of computational algorithms with arbitrary order of accuracy and establishes guidelines which cannot be circumvented. In summary, unlike the traditional approaches customarily employed in the theoretical development of computational algorithms, the unified theory underlying time-dependent first-order systems serves as a viable avenue to foster the notion of algorithms by design. Copyright © 2004 John Wiley & Sons, Ltd.

14.
A variation of the extended finite element method for three-dimensional fracture mechanics is proposed. It utilizes a novel form of enrichment and point-wise and integral matching of displacements of the standard and enriched elements in order to achieve higher accuracy, optimal convergence rates, and improved conditioning for two-dimensional and three-dimensional crack problems. A bespoke benchmark problem is introduced to determine the method's accuracy in the general three-dimensional case, where it is demonstrated that the proposed approach improves the accuracy and reduces the number of iterations required for the iterative solution of the resulting system of equations by 40% for moderately refined meshes and topological enrichment. Moreover, when a fixed enrichment volume is used, the number of iterations required grows at a rate which is reduced by a factor of 2 compared with the standard extended finite element method, diminishing the number of iterations by almost one order of magnitude. Copyright © 2015 John Wiley & Sons, Ltd.

15.
In many large engineering design problems, it is not computationally feasible or realistic to store Jacobians or Hessians explicitly. Matrix-free implementations of standard optimization methods, implementations that do not explicitly form Jacobians and Hessians and possibly use quasi-Newton approximations, circumvent those restrictions, but such implementations are virtually non-existent. We develop a matrix-free augmented-Lagrangian algorithm for nonconvex problems with both equality and inequality constraints. Our implementation is developed in the Python language, is available as an open-source package, and allows for approximating Hessian and Jacobian information. We show that our approach solves problems from the CUTEr and COPS test sets in a comparable number of iterations to state-of-the-art solvers. We report numerical results on a structural design problem that is typical in aircraft wing design optimization. The matrix-free approach makes solving problems with thousands of design variables and constraints tractable, even when function and gradient evaluations are costly.
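The matrix-free idea can be sketched on a toy equality-constrained problem: only function, gradient, and Jacobian-transpose-vector callbacks are used, and no matrix is ever formed (this is a minimal sketch, not the authors' package; their subproblem solver is more sophisticated than the plain gradient descent used here):

```python
import numpy as np

# Augmented-Lagrangian sketch for:
#   minimize x1^2 + x2^2  subject to  x1 + x2 = 1   (solution: (0.5, 0.5))

def f(x):      return x @ x
def grad_f(x): return 2.0 * x
def c(x):      return np.array([x[0] + x[1] - 1.0])   # equality constraint
def jac_c_T(x, v):                                    # J(x)^T v, matrix-free
    return np.array([v[0], v[0]])

def augmented_lagrangian(x0, mu=10.0, outer=20, inner=500, lr=1e-2):
    x, lam = np.asarray(x0, float), np.zeros(1)
    for _ in range(outer):
        for _ in range(inner):                 # subproblem: min_x L_A(x; lam, mu)
            g = grad_f(x) + jac_c_T(x, lam + mu * c(x))
            x = x - lr * g
        lam = lam + mu * c(x)                  # first-order multiplier update
    return x, lam

x, lam = augmented_lagrangian([3.0, -1.0])
```

For this problem the exact multiplier is λ* = −1 (from ∇f + λ∇c = 0 at the solution), which the update recovers geometrically.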

16.
There are three characteristics of engineering design optimization problems: (1) the design variables are often discrete physical quantities; (2) the constraint functions often cannot be expressed analytically in terms of the design variables; (3) in many engineering design applications, critical constraints are often ‘pass–fail’, ‘0–1’ type binary constraints. This paper presents a sequential approximation method specifically for engineering optimization problems with these three characteristics. In this method a back-propagation neural network is trained to simulate a rough map of the feasible domain formed by the constraints, using a few representative training data points. A training data point consists of a discrete design point and whether that design point is feasible or infeasible. Function values of the constraints are not required. A search algorithm then searches for the optimal point in the feasible domain simulated by the neural network. This new design point is checked against the true constraints to see whether it is feasible, and is then added to the training set. The neural network is trained again with this added information, in the hope that the network will better simulate the boundary of the feasible domain of the true optimization problem. A further search is then made for the optimal point in the new approximated feasible domain. This process continues iteratively until the approximate model locates the same optimal point in consecutive iterations. A restart strategy is also employed so that the method has a better chance of reaching a global optimum. Design examples with large discrete design spaces and implicit constraints are solved to demonstrate the practicality of this method.

17.
If we assume no higher-order interactions for the 2^n 3^m factorial series of designs, then relaxing the restrictions of equal frequency for the factors and complete orthogonality for each estimate permits considerable savings in the number of runs required to estimate all the main effects and two-factor interactions. Three construction techniques are discussed which yield designs providing orthogonal estimates of all the main effects and allowing estimation of all the two-factor interactions. These techniques are: (i) collapsing of factors in symmetrical fractionated 3^(m−p) designs, (ii) conjoining fractionated designs, and (iii) combinations of (i) and (ii). Collapsing factors in a design either maintains or increases the resolution of the original design, but does not decrease it. Plans are presented for certain values of (n, m) as examples of the construction techniques. Systematic methods of analysis are also discussed.

18.
This paper shows that economic statistical design can provide better statistical properties without significantly increasing optimal total costs. Cost comparisons between optimal economic statistical designs and optimal economic designs show no significant cost increases. The average run length (ARL) constraints added by economic statistical design significantly improve the statistical properties of the control chart scheme: false alarm frequency is limited while good shift detection characteristics are kept. In addition, the Multivariate Exponentially Weighted Moving Average (MEWMA) control schemes performed better from the cost standpoint than the benchmark pure statistical design, the Hotelling T² control chart. This improvement held for both unconstrained and constrained designs. Finally, cost comparisons at small values of n showed a significant advantage for the MEWMA schemes. Copyright © 2001 John Wiley & Sons, Ltd.
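The MEWMA charting statistic compared here against Hotelling T² can be sketched directly (a minimal sketch using the common asymptotic covariance form; the covariance matrix and smoothing constant below are illustrative assumptions, and the control limit that an economic statistical design would optimize is omitted):

```python
import numpy as np

# MEWMA statistic: Z_i = r*X_i + (1-r)*Z_{i-1},
# T2_i = Z_i' S_Z^{-1} Z_i with asymptotic covariance S_Z = r/(2-r) * Sigma.
# With r = 1 it reduces to the Hotelling T^2 of the current observation.

def mewma_t2(X, Sigma, r):
    Sz_inv = np.linalg.inv((r / (2.0 - r)) * Sigma)
    z = np.zeros(X.shape[1])
    t2 = []
    for x in X:
        z = r * x + (1 - r) * z         # exponentially weighted smoothing
        t2.append(z @ Sz_inv @ z)
    return np.array(t2)

Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])
rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], Sigma, size=200)   # in-control data
t2_smooth = mewma_t2(X, Sigma, r=0.1)
t2_hotelling = mewma_t2(X, Sigma, r=1.0)               # Hotelling T^2 case
```

The smoothing constant r trades shift-detection speed against false-alarm behavior, which is exactly the statistical property the ARL constraints in the abstract control.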

19.
This article concerns the design of tapers for coupling power between uniform and slow-light periodic waveguides. New optimization methods are described for designing robust tapers, which not only perform well under nominal conditions but also over a given set of parameter variations. When the set of parameter variations models the inevitable variations typical in the manufacture or operation of the coupler, a robust design is one that will have a high yield despite these parameter variations. The article introduces successive refinement and robust optimization based on multi-scenario optimization with iterative sampling of the uncertain parameters, using a fast method for approximately evaluating the reflection coefficient. Robust design results are compared with a linear taper and with optimized tapers that do not take parameter variation into account. Finally, the robust performance of the resulting designs is verified using an accurate, but much more expensive, method for evaluating the reflection coefficient.

20.
An analogue of the Box-Hunter rotatability property for second-order response surface designs in k independent variables is presented. When such designs are used to estimate the first derivatives with respect to each independent variable, the variance of the estimated derivative is a function of the coordinates of the point at which the derivative is evaluated, and is also a function of the design. By choice of design it is possible to make this variance constant for all points equidistant from the design origin. This property is called slope-rotatability, by analogy with the corresponding property for the variance of the estimated response, ŷ.

For central composite designs slope-rotatability can be achieved simply by adjusting the axial point distances (α), so that the variance of the pure quadratic coefficients is one-fourth the variance of the mixed second-order coefficients. Tables giving appropriate values of α have been constructed for 2 ≤ k ≤ 8. For 5 ≤ k ≤ 8, central composite designs involving fractional factorials are used. It is also shown that appreciable advantage is gained by replicating axial points rather than confining replication to the center point only.
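The variance condition stated above can be checked numerically for the smallest case (a sketch for k = 2 with 4 factorial points, 4 axial points, and one center run, a layout assumed here for illustration; it searches for the α at which Var(pure quadratic coefficient) equals one-fourth of Var(mixed coefficient)):

```python
import numpy as np

def ccd(alpha, n_center=1):
    """k = 2 central composite design: factorial, axial, and center runs."""
    fact = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
    center = [(0.0, 0.0)] * n_center
    return np.array(fact + axial + center, dtype=float)

def coeff_variance_ratio(alpha):
    D = ccd(alpha)
    x1, x2 = D[:, 0], D[:, 1]
    X = np.column_stack([np.ones(len(D)), x1, x2, x1**2, x2**2, x1*x2])
    C = np.linalg.inv(X.T @ X)          # covariance of b-hat, up to sigma^2
    return C[3, 3] / C[5, 5]            # Var(b_11) / Var(b_12)

# Scan the axial distance for the slope-rotatable value (ratio = 1/4).
alphas = np.linspace(1.0, 3.0, 4001)
ratios = np.array([coeff_variance_ratio(a) for a in alphas])
alpha_star = alphas[np.argmin(np.abs(ratios - 0.25))]
```

The ratio decreases monotonically through 0.25 over this range, so the scan pinpoints the slope-rotatable axial distance for this particular run layout; tabulated values for other k and numbers of center runs would follow the same recipe.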


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号