20 similar records found.
1.
When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal is to incorporate all of the experts' information when choosing a design, that is, when deciding which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where several potentially non-overlapping priors are under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust than other reasonable design choices. We illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.
2.
The objective of the present study was to develop a tablet formulation with a zero-order drug release profile based on a balanced blend of three matrix ingredients. To accomplish this goal, a 17-run, three-factor, two-level D-optimal mixture design was employed to evaluate the effect of Polyox™ (X1), Carbopol® (X2), and lactose (X3) concentrations on the release rate of theophylline from the matrices. Tablets were prepared by direct compression and were subjected to an in vitro dissolution study in phosphate buffer at pH 7.2. Polynomial models were generated for the responses Y4 (percent released in 8 h) and Y6 (similarity factor, f2). The fitted models were used to predict the composition of a formulation that would have a dissolution profile similar to ideal zero-order release at a rate of 8.33% per hour. When tested, the dissolution profile of the optimized formulation was comparable to the reference profile (f2 was 74.2, and n [release exponent] was 0.9). This study demonstrated that a balanced blend of matrix ingredients can be used to attain a zero-order release profile. Optimization was feasible through response surface methodology, which proved efficient in designing controlled-release dosage forms.
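The similarity factor f2 used above as response Y6 has a standard closed form, f2 = 50·log10(100/√(1 + MSD)), where MSD is the mean squared difference between the reference and test profiles at the sampled timepoints. A minimal sketch of the computation (the observed release values below are hypothetical, not the paper's data):

```python
import numpy as np

def f2_similarity(reference, test):
    """Similarity factor f2 between two dissolution profiles:
    f2 = 50 * log10(100 / sqrt(1 + mean squared difference)).
    Values between 50 and 100 indicate similar profiles."""
    msd = np.mean((np.asarray(reference) - np.asarray(test)) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Ideal zero-order release at 8.33 %/h, sampled hourly over 8 h
hours = np.arange(1, 9)
ideal = 8.33 * hours
observed = np.array([9.0, 17.5, 26.0, 33.8, 42.0, 50.5, 58.0, 66.5])  # hypothetical
print(f"f2 = {f2_similarity(ideal, observed):.1f}")
```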
3.
Caleb King, Bradley Jones, Joseph Morgan, Ryan Lekivetz. Quality and Reliability Engineering International, 2020, 36(3): 797-816
In developing screening experiments for two-level factors, practitioners typically are familiar with regular fractional factorial designs, which are orthogonal, globally D-optimal (i.e., 100% D-efficient), and exist whenever the run size n is a power of two. In addition, nonregular D-optimal orthogonal designs can be generated for almost any n that is a multiple of four, the most notable being the family of Plackett and Burman designs. If resource constraints dictate a run size n that is not a multiple of four, then although an orthogonal design for two-level factors does not exist, one can still consider a D-optimal design. Exchange algorithms are available in commercial computer software for creating highly D-efficient designs. However, as the number of factors increases, computer searches eventually fail to find the globally optimal design for any n, or require impractical search times. In this article, we compile state-of-the-art direct construction methods from the literature for producing globally D-optimal designs for virtually any number of two-level factors and any n greater than the number of factors. We summarize the known methods as well as areas for continued research, with the intention of catalyzing research in extending these construction methods.
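For a two-level design coded ±1 with a main-effects model matrix X of n runs and p columns, D-efficiency is conventionally computed as 100·det(X'X)^(1/p)/n, which equals 100% exactly when X is orthogonal. A small sketch under that convention:

```python
import numpy as np
from itertools import product

def d_efficiency(X: np.ndarray) -> float:
    """D-efficiency (in %) of a model matrix X with +/-1 coded columns.
    100% corresponds to an orthogonal design, where X'X = n * I."""
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / n

# 2^3 full factorial plus intercept column: globally D-optimal
runs = np.array(list(product([-1, 1], repeat=3)))
X = np.hstack([np.ones((8, 1)), runs])
print(d_efficiency(X))  # -> 100.0
```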
4.
Additive manufacturing (AM) offers numerous benefits for innovative design solutions. However, engineers are currently not supported in identifying and incorporating these potentials systematically in their design solutions. In this paper, previous Design for Additive Manufacturing (DfAM) approaches are first reviewed comprehensively and classified into distinct categories according to their main purpose and application. They are then analysed further by being related to conventional design methodologies like VDI 2221. Since previous DfAM approaches only provide selective assistance at single steps in the product development process, a new framework for DfAM is proposed. Existing methods and tools, both from DfAM and from general design methodologies, are integrated into the modular framework structure. A concept for using the framework is presented to provide design engineers with continuous support in all product development phases, thereby fostering the complete exploitation of AM potentials and the development of AM-conformal designs.
5.
Anne C. Shoemaker, Raghu N. Kacker. Quality and Reliability Engineering International, 1988, 4(2): 95-103
Robust design is an important method for improving product manufacturability and life, and for increasing manufacturing process stability and yield. In 1980 Genichi Taguchi introduced his approach to using statistically planned experiments in robust product and process design to U.S. industry. Since then, the robust design problem and Taguchi's approach to solving it have received much attention from product designers, manufacturers, statisticians and quality professionals. Although most agree on the importance of the robust design problem, controversy over some of the specific methods used to solve the problem has made this an active research area. Although the answers are not all in yet, the importance of the problem has led to development of a four-step methodology for implementing robust design. The steps are (1) formulate the problem by stating objectives and then listing and classifying product or process variables, (2) plan an experiment to study these variables, (3) identify improved settings of controllable variables from the experiment's results and (4) confirm the improvement in a small follow-up experiment. This paper presents a methodology for the problem formulation and experiment planning steps. We give practical guidelines for making key decisions in these two steps, including choice of response characteristics, and specification of interactions and test levels for variables. We describe how orthogonal arrays and interaction graphs can be used to simplify the process of planning an experiment. We also compare the experiment planning strategies we are recommending to those of Taguchi and to more traditional approaches.
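Step (3), identifying improved settings, is typically driven by summary statistics computed per control-factor setting; in Taguchi's formulation these are signal-to-noise ratios. A sketch of two common variants (the replicate values are hypothetical):

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi S/N ratio (dB) when larger response values are better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_nominal_the_best(y):
    """Taguchi S/N ratio (dB) for hitting a target: mean^2 / variance."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean()**2 / y.var(ddof=1))

# Hypothetical replicated responses at one control-factor setting
print(sn_nominal_the_best([9.8, 10.1, 10.0, 9.9]))
```

The setting with the highest S/N ratio is the candidate carried into the confirmation experiment of step (4).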
6.
7.
Guillermo de León, Pere Grima, Xavier Tort-Martorell. Quality and Reliability Engineering International, 2011, 27(4): 489-497
When the results of an experimental design involving both control factors and noise factors are analyzed, it may be difficult to determine the combination of control-factor values that produces the best behavior of the response, considering both its level (or distance from the target value) and its variability. This article presents an analysis proposal that is based on the model obtained for the response and uses, as its central element, a scatter plot of the response's expected value vs. its standard deviation. In this plot, each point corresponds to a combination of values of the control factors, so it is easy to identify the points with better response behavior. In our opinion, this graph provides significant advantages over other methods that have been proposed; among them is the fact that it is always a scatter plot, regardless of the number of factors that end up being active, and that it is easy to understand and use, especially given the possibilities offered by current statistical software packages.
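A minimal sketch of the proposed plot: fit (or assume) dual models for the expected response and its standard deviation as functions of the control factors, evaluate both over a grid of control-factor combinations, and scatter one against the other. The two models below are hypothetical stand-ins for models fitted from a real experiment:

```python
import itertools
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical fitted models: expected response and its standard
# deviation as functions of two control factors x1, x2 in [-1, 1]
mean_model = lambda x1, x2: 50 + 4*x1 - 3*x2 + 2*x1*x2
sd_model   = lambda x1, x2: 2.0 + 0.8*x1 + 0.5*x2**2

levels = np.linspace(-1, 1, 11)
points = [(mean_model(a, b), sd_model(a, b))
          for a, b in itertools.product(levels, levels)]
mu, sd = zip(*points)

plt.scatter(mu, sd, s=10)
plt.xlabel("Predicted mean response")
plt.ylabel("Predicted standard deviation")
plt.title("Each point is one control-factor combination")
plt.show()
```

Points near the target level with low standard deviation are the candidate robust settings.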
8.
Sangmun Shin. Engineering Optimization, 2013, 45(11): 989-1009
Many practitioners and researchers have implemented robust design and tolerance design as quality improvement and process optimization tools for more than two decades. Robust design is an enhanced process/product design methodology for determining the best settings of control factors while minimizing process bias and variability. Tolerance design is aimed at determining the best tolerance limits for minimizing the total cost incurred by both the customer and manufacturer by balancing quality loss due to variations in product performance and the cost of controlling these variations. Although robust design and tolerance design have received much attention from researchers and practitioners, there is ample room for improvement. First, most researchers consider robust design and tolerance design as separate research fields. Second, most research work is based on a single quality characteristic. The primary goal of this paper is to integrate a sequential robust design–tolerance design optimization procedure within a bi-objective paradigm, which, the authors believe, is the first attempt in the robust design and tolerance design literature. Models are proposed and numerical examples along with sensitivity analysis are performed for verification purposes.
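The two objectives being balanced can be made concrete with Taguchi's quadratic loss, E[k(Y − T)²] = k(σ² + (μ − T)²), traded off against a control cost that grows as tolerances tighten. The cost model and the assumption σ = t/3 below are purely illustrative, not the paper's formulation:

```python
import numpy as np

def expected_quality_loss(mu, sigma, target, k=1.0):
    """Taguchi quadratic loss: E[k*(Y-T)^2] = k*(sigma^2 + (mu-target)^2)."""
    return k * (sigma**2 + (mu - target)**2)

def control_cost(tolerance, c=5.0):
    """Hypothetical manufacturing cost: tighter tolerances cost more."""
    return c / tolerance

# Trade-off sweep, assuming process sigma scales with tolerance t (sigma = t/3)
for t in [0.1, 0.2, 0.4, 0.8]:
    loss = expected_quality_loss(mu=10.02, sigma=t/3, target=10.0)
    print(f"tolerance={t:.1f}  quality loss={loss:.4f}  control cost={control_cost(t):.2f}")
```

A bi-objective formulation keeps both columns as separate objectives rather than collapsing them into one weighted sum.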
9.
Within engineering design, optimization often involves building models of working systems to improve design objectives such as performance, reliability and cost. Bond graph models express systems in terms of energy flow and can be used to identify key factors that influence system behaviour. Robust Engineering Design (RED) is a strategy for the optimization of systems through experimentation and empirical modelling; however, experiments can often be prohibitively expensive for large or complex systems. By using bond graphs as a front-end to RED, experiments on systems could be designed more efficiently, reducing the number of experiments required for accurate empirical modelling. Two case study examples are given which show that bond graphs can be used to good effect in the empirical analysis of engineering systems.
10.
Conventional space-filling experimental design provides uniform coverage of a hypercube design space. When constraints are imposed, the results may contain many infeasible points. Simply omitting these points leads to fewer feasible points than desired and a design of experiments that is not optimally distributed. In this research, an adaptive method is developed to create space-filling points in arbitrarily constrained spaces. First, a design space reconstruction method is developed to reduce the invalid exploration space and enhance the efficiency of experimental designs. Then, a synthetic criterion of uniformity and feasibility is proposed and optimized by the enhanced stochastic evolutionary method to obtain the initial sampling combination. Finally, an adaptive adjustment strategy of design levels is constructed to obtain the required number of feasible points. Various test cases with convex and non-convex, connected and non-connected design spaces are implemented to verify the efficacy of the proposed method.
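As a simple baseline for the problem described (not the paper's adaptive method), one can oversample a Latin hypercube, discard infeasible candidates, and greedily pick a maximin subset; the paper's approach improves on exactly this kind of construction. A sketch with a hypothetical non-convex constraint:

```python
import numpy as np
from scipy.stats import qmc

def constrained_maximin(n_points, feasible, dim, n_candidates=5000, seed=0):
    """Greedy maximin selection of space-filling points from feasible
    Latin-hypercube candidates in [0, 1]^dim (a simple baseline only)."""
    cand = qmc.LatinHypercube(d=dim, seed=seed).random(n_candidates)
    cand = cand[np.array([feasible(x) for x in cand])]
    chosen = [cand[0]]
    for _ in range(n_points - 1):
        # Distance from every candidate to its nearest chosen point
        d = np.min(np.linalg.norm(
            cand[:, None, :] - np.array(chosen)[None, :, :], axis=2), axis=1)
        chosen.append(cand[np.argmax(d)])
    return np.array(chosen)

# Non-convex feasible region: unit square minus a central disc (hypothetical)
design = constrained_maximin(20, lambda x: np.linalg.norm(x - 0.5) > 0.3, dim=2)
print(design.shape)  # (20, 2)
```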
11.
The management of bacterial diseases calls for a detailed knowledge about the dynamic changes in host–bacteria interactions. Biological insights are gained by integrating experimental data with mechanistic mathematical models to infer experimentally unobservable quantities. This inter-disciplinary field would benefit from experiments with maximal information content yielding high-precision inference. Here, we present a computationally efficient tool for optimizing experimental design in terms of parameter inference in studies using isogenic-tagged strains. We study the effect of three experimental design factors: number of biological replicates, sampling timepoint selection and number of copies per tagged strain. We conduct a simulation study to establish the relationship between our optimality criterion and the size of parameter estimate confidence intervals, and showcase its application in a range of biological scenarios reflecting different dynamics patterns observed in experimental infections. We show that in low-variance systems with low killing and replication rates, predicting high-precision experimental designs is consistently achieved; higher replicate sizes and strategic timepoint selection yield more precise estimates. Finally, we address the question of resource allocation under constraints; given a fixed number of host animals and a constraint on total inoculum size per host, infections with fewer strains at higher copies per strain lead to higher-precision inference.
12.
A number of multi-objective evolutionary algorithms have been proposed in recent years and many of them have been used to solve engineering design optimization problems. However, designs need to be robust for real-life implementation, i.e., performance should not degrade substantially under expected variations in the variable values or operating conditions. Solutions of constrained robust design optimization problems should not be too close to the constraint boundaries so that they remain feasible under expected variations. A robust design optimization problem is far more computationally expensive than a design optimization problem, as neighbourhood assessments of every solution are required to compute the performance variance and to ensure neighbourhood feasibility. A framework for robust design optimization using a surrogate model for neighbourhood assessments is introduced in this article. The robust design optimization problem is modelled as a multi-objective optimization problem with the aim of simultaneously maximizing performance and minimizing performance variance. A modified constraint-handling scheme is implemented to deal with neighbourhood feasibility. A radial basis function (RBF) network is used as a surrogate model and the accuracy of this model is maintained via periodic retraining. In addition to using surrogates to reduce computational time, the algorithm has been implemented on multiple processors using a master–slave topology. The preliminary results of two constrained robust design optimization problems indicate that substantial savings in the actual number of function evaluations are possible while maintaining an acceptable level of solution quality.
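The core computational trick, replacing expensive neighbourhood evaluations with a cheap surrogate, can be sketched with SciPy's radial basis function interpolator standing in for the paper's periodically retrained RBF network (the training data and objective below are hypothetical):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Expensive objective evaluated at sampled design points (hypothetical data)
rng = np.random.default_rng(1)
X_train = rng.uniform(-2, 2, size=(60, 2))
y_train = np.array([x[0]**2 + 3*np.sin(x[1]) for x in X_train])

surrogate = RBFInterpolator(X_train, y_train)

def robustness(x, delta=0.1, n=50):
    """Estimate mean and variance of performance over a neighbourhood
    of x using the cheap surrogate instead of the true objective."""
    nbhd = x + rng.uniform(-delta, delta, size=(n, len(x)))
    preds = surrogate(nbhd)
    return preds.mean(), preds.var()

# The two quantities feed the bi-objective search: maximize mean
# performance while minimizing its variance over the neighbourhood.
mu, var = robustness(np.array([0.5, 0.5]))
print(f"mean={mu:.3f}  variance={var:.5f}")
```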
13.
An improved genetic algorithm (IGA) is presented to solve the mixed-discrete-continuous design optimization problems. The IGA approach combines the traditional genetic algorithm with the experimental design method. The experimental design method is incorporated in the crossover operations to systematically select better genes to tailor the crossover operations in order to find the representative chromosomes to be the new potential offspring, so that the IGA approach possesses the merit of global exploration and obtains better solutions. The presented IGA approach is effectively applied to solve one structural and five mechanical engineering problems. The computational results show that the presented IGA approach can obtain better solutions than both the GA-based and the particle-swarm-optimizer-based methods reported recently.
14.
Daniel Busby. Reliability Engineering & System Safety, 2009, 94(7): 1183-1193
Large computer simulators usually have complex and nonlinear input-output functions. This complicated input-output relation can be analyzed by global sensitivity analysis; however, this usually requires massive Monte Carlo simulations. To effectively reduce the number of simulations, statistical techniques such as Gaussian process emulators can be adopted. The accuracy and reliability of these emulators depend strongly on the experimental design, that is, on the selection of suitable evaluation points. In this paper a new sequential design strategy called hierarchical adaptive design is proposed to obtain an accurate emulator using the smallest possible number of simulations. The hierarchical design proposed in this paper is tested on various standard analytic functions and on a challenging reservoir forecasting application. Comparisons with standard one-stage designs such as maximin Latin hypercube designs show that the hierarchical adaptive design produces a more accurate emulator with the same number of computer experiments. Moreover, a stopping criterion is proposed that makes it possible to run only the number of simulations necessary to reach the required approximation accuracy.
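A generic sequential design loop (not the paper's hierarchical scheme, but the same idea in miniature) fits a Gaussian process emulator, then adds the candidate point with the largest predictive standard deviation until the simulation budget is spent. A sketch using scikit-learn with a hypothetical one-dimensional simulator:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):  # hypothetical stand-in for an expensive computer code
    return np.sin(3 * x) + 0.5 * x**2

# Small initial design, then add points where the emulator is least certain
X = np.linspace(0, 2, 4).reshape(-1, 1)
y = simulator(X).ravel()
candidates = np.linspace(0, 2, 200).reshape(-1, 1)

for _ in range(6):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  normalize_y=True).fit(X, y)
    _, sd = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(sd)].reshape(1, -1)
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new).ravel())

print(f"emulator built from {len(X)} runs")
```

A stopping criterion of the kind the paper proposes would replace the fixed loop count with a test on the emulator's estimated accuracy.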
15.
Timothy A. Cleaver, Alex J. Gutman, Christopher L. Martin, Mark F. Reeder. Quality Engineering, 2016, 28(3): 280-292
This article presents an application of design of experiments (DOE) in a computational fluid dynamics (CFD) environment to study forces and moments acting on a missile through various speeds and angles of attack. Researchers employed a four-factor Latin hypercube space-filling design and a Gaussian process model to build a surrogate of the CFD environment. The surrogate model was used to characterize missile aerodynamic coefficients across the transonic flight regime. The DOE process completed the task with fewer computational resources than a traditional one-factor-at-a-time (OFAT) approach. To validate the surrogate model, specific OFAT angle-of-attack sweeps were performed, providing a direct comparison between the Gaussian process model and the OFAT analysis. In most cases, the surrogate computer model was able to accurately capture the nonlinear response variables. Moreover, the surrogate model enabled a dynamic prediction tool that could investigate untested scenarios, a capability not available with OFAT. The DOE process consequently received support from engineers who do not typically use DOE.
16.
The quality by design (QbD) paradigm guides the pharmaceutical industry towards improved understanding of products and processes, and at the same time facilitates a high degree of manufacturing and regulatory flexibility through the establishment of the design space. This review article presents scientific, statistical and regulatory considerations in design space development. All key development milestones, starting with planning, selection of factors, experimental execution, data analysis, model development and assessment, verification, and validation, and ending with design space submission, are presented and discussed. The focus is especially on frequently ignored topics, such as the management of factors and critical quality attributes (CQAs) that will not be included in the experimental design, evaluation of the risk of failure at the design space edges, and the modeling of scale-up strategy. Moreover, development of a design space that is independent of manufacturing scale is proposed as the preferred approach.
17.
Affective design and the determination of engineering specifications are commonly conducted separately in the early product design stage. Generally, designers and engineers are required to determine the settings of design attributes (for affective design) and engineering requirements (for engineering design), respectively, for new products. Some design attributes and some engineering requirements could be common, yet their settings could differ because the two processes are separated. No previous study has provided a methodology that determines the settings of the design attributes and engineering requirements simultaneously. To bridge this gap, a methodology for simultaneously considering affective design and the determination of engineering specifications of a new product is proposed. The proposed methodology mainly involves the generation of customer satisfaction models, the formulation of a multi-objective optimisation model and its solution using a chaos-based NSGA-II. To illustrate and validate the proposed methodology, a case study of mobile phone design was conducted. A validation test showed that the customer satisfaction values obtained with the proposed methodology were higher than those obtained with the combined standalone quality function deployment and standalone affective design approach.
18.
The basic requirement in the laser micro-drilling process is to achieve high product quality at the minimum machining cost, which can be realised through parameter design. In this paper, we propose a new economic parameter design under the framework of Bayesian modelling and optimisation. First, Bayesian seemingly unrelated regression (SUR) models are used to relate the input factors to the output responses of the laser micro-drilling process. Then, simulated response values that reflect the real laser micro-drilling process are obtained by using the Gibbs sampling procedure. Moreover, a novel rejection cost function and a quality loss function are constructed based on the simulated responses. Finally, an optimisation scheme integrating the rejection cost (i.e. rework cost and scrap cost) function and the quality loss function is implemented using a multi-objective genetic algorithm to find feasible economic parameter settings for the laser micro-drilling process.
19.
When experiments are carried out over a period of time, the response may be subject to time trends. We use an algorithm for exact optimum designs to construct a series of designs resistant to linear and quadratic trend. Designs considered include the allocation of simple treatments, multifactor designs in qualitative or quantitative factors, and response surface designs. The investigation is extended from consideration of linear ordering in time to include designs with several trials at each time point, designs in which several trials are spread over only one out of three shifts per day, and designs in which there are more time points than experiments. Comparisons with designs in the absence of trend show that surprisingly little information is lost by designing for protection against a potential trend. The BT algorithm for obtaining these designs is outlined.
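Trend resistance has a simple algebraic test: a factor's effect estimate is unbiased by a polynomial time trend when the factor's ±1 column, taken in run order, is orthogonal to that trend. A sketch checking the classic ABBA BAAB run order:

```python
import numpy as np

def trend_resistance(design_column, degree=2):
    """Inner products of a +/-1 treatment column (in run order) with
    centred linear and quadratic time trends; zeros mean the factor
    estimate is unbiased by a trend of that degree."""
    col = np.asarray(design_column, dtype=float)
    t = np.arange(len(col), dtype=float)
    t -= t.mean()
    return [float(col @ t**k) for k in range(1, degree + 1)]

# Run order ABBA BAAB (A = +1, B = -1)
col = [1, -1, -1, 1, -1, 1, 1, -1]
print(trend_resistance(col))  # -> [0.0, 0.0]: linear- and quadratic-trend free
```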
20.
Robust optimisation is performed in the preliminary design phase using analytic models, which are derived either from finite element models or from approximations of physical laws. The variability of the design parameters is described by random variables identified by their first two moments, the mean and the standard deviation. A robust design approach is proposed that determines whether or not a robust design solution exists for the given design problem. This approach combines a reformulation of the analytic model with the new design specifications; it integrates the parameter uncertainties (mean and standard deviation) with a deterministic optimisation algorithm (SQP). The means and standard deviations are computed using the Propagation of Variance method. The engineering application of an electrical actuator design is introduced and used to show the implementation and the effectiveness of the proposed robust approach.
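The Propagation of Variance method referenced here is, at first order, a Taylor expansion: μ_Y ≈ f(μ) and σ_Y² ≈ Σᵢ (∂f/∂xᵢ)² σᵢ² for independent inputs. A sketch using finite-difference gradients and a hypothetical actuator model (not the paper's):

```python
import numpy as np

def propagate_variance(f, means, stds, h=1e-6):
    """First-order propagation of variance: mean_Y ~= f(mu) and
    var_Y ~= sum_i (df/dx_i at mu)^2 * sigma_i^2, with central
    finite-difference gradients. Assumes independent inputs and
    near-linear behaviour around the means."""
    means = np.asarray(means, dtype=float)
    eye = np.eye(len(means))
    grad = np.array([(f(means + h * eye[i]) - f(means - h * eye[i])) / (2 * h)
                     for i in range(len(means))])
    var = np.sum((grad * np.asarray(stds)) ** 2)
    return f(means), np.sqrt(var)

# Hypothetical actuator model: torque ~ k * B * i, all inputs uncertain
torque = lambda x: x[0] * x[1] * x[2]
mean, std = propagate_variance(torque, means=[2.0, 1.2, 5.0], stds=[0.05, 0.02, 0.1])
print(f"mean={mean:.3f}  std={std:.3f}")
```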