Similar Literature
20 similar documents found.
1.
Inserting estimates for the missing observations from split-plot designs restores their balanced or orthogonal structure and alleviates the difficulties in the statistical analysis. In this article, we extend a method due to Draper and Stoneman to estimate the missing observations from unreplicated two-level factorial and fractional factorial split-plot (FSP and FFSP) designs. The missing observations, which can either be from the same whole plot, from different whole plots, or comprise entire whole plots, are estimated by equating to zero a number of specific contrast columns equal to the number of missing observations. These estimates are inserted into the design table, and the estimates for the remaining effects (or alias chains of effects, as is the case with FFSP designs) are plotted on two half-normal plots: one for the whole-plot effects and the other for the subplot effects. If the smaller effects do not point at the origin, contrast columns different from some or all of the initial ones should be chosen and the plots re-examined for bias. Using examples, we show how the method provides estimates for the missing observations that are very close to their actual values. Copyright © 2007 John Wiley & Sons, Ltd.
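For illustration, a minimal sketch of the contrast-zeroing idea on a plain 2^3 factorial (the paper works with split-plot strata and two half-normal plots; the data and the choice of zeroed columns below are hypothetical, not the authors' example):

```python
import numpy as np
from itertools import product

# 2^3 full factorial in coded units; columns of X are the contrast
# columns A, B, C, AB, AC, BC, ABC (the I column is omitted).
runs = np.array(list(product([-1.0, 1.0], repeat=3)))
A, B, C = runs.T
X = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])
labels = ["A", "B", "C", "AB", "AC", "BC", "ABC"]

# Hypothetical responses with runs 2 and 5 missing (0-indexed).
y = np.array([52.0, np.nan, 49.0, 61.0, 50.0, np.nan, 48.0, 63.0])
miss = np.flatnonzero(np.isnan(y))
obs = np.flatnonzero(~np.isnan(y))

# Equate to zero as many presumed-negligible contrasts as there are
# missing values (here the two highest-order interactions) and solve
# the resulting linear system for the missing responses.
zeroed = [labels.index("ABC"), labels.index("BC")]
Cz = X[:, zeroed].T                     # one row per zeroed contrast
y_miss = np.linalg.solve(Cz[:, miss], -Cz[:, obs] @ y[obs])
y[miss] = y_miss
print("estimates for the missing runs:", y_miss)

# Effect estimates (contrast / (n/2)) for the half-normal plots; in the
# split-plot setting these would be split into whole-plot and subplot sets.
for lab, eff in zip(labels, X.T @ y / (len(y) / 2)):
    print(f"{lab}: {eff:+.2f}")
```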

2.
The D-optimality criterion is often used in computer-generated experimental designs when the response of interest is binary, such as when the attribute of interest can be categorized as pass or fail. The majority of methods in the generation of D-optimal designs focus on logistic regression as the base model for relating a set of experimental factors with the binary response. Despite the advances in computational algorithms for calculating D-optimal designs for the logistic regression model, very few have acknowledged the problem of separation, a phenomenon where the responses are perfectly separable by a hyperplane in the design space. Separation causes one or more parameters of the logistic regression model to be inestimable via maximum likelihood estimation. The objective of this paper is to investigate the tendency of computer-generated, nonsequential D-optimal designs to yield separation in small-sample experimental data. Sets of local D-optimal and Bayesian D-optimal designs with different run (sample) sizes are generated for several "ground truth" logistic regression models. A Monte Carlo simulation methodology is then used to estimate the probability of separation for each design. Results of the simulation study confirm that separation occurs frequently in small-sample data and that separation is more likely to occur when the ground truth model has interaction and quadratic terms. Finally, the paper illustrates that different designs with identical run sizes created from the same model can have significantly different chances of encountering separation.
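As a rough sketch of the simulation methodology described above (not the authors' code), one can estimate the probability of separation by repeatedly sampling binary responses from an assumed ground-truth logistic model and checking linear separability with a small linear program; the design, coefficients, and Monte Carlo sizes below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

def is_separated(X, y):
    """LP check for (quasi-)complete separation: maximize the sum of
    margins t_i in [0, 1] subject to s_i * (x_i . w) >= t_i with w free;
    a positive optimum means some hyperplane weakly separates the data."""
    n, p = X.shape
    s = 2.0 * y - 1.0
    c = np.r_[np.zeros(p), -np.ones(n)]             # minimize -sum(t)
    A_ub = np.hstack([-s[:, None] * X, np.eye(n)])  # t_i - s_i x_i.w <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  bounds=[(None, None)] * p + [(0.0, 1.0)] * n)
    return res.fun < -1e-8

# Hypothetical 8-run, 2-factor design and assumed ground-truth model.
design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                   [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
X = np.column_stack([np.ones(len(design)), design])  # intercept + mains
beta = np.array([0.5, 1.5, -1.0])                    # "ground truth"
prob = 1.0 / (1.0 + np.exp(-X @ beta))

n_sim = 1000  # small, for illustration
hits = sum(is_separated(X, rng.binomial(1, prob).astype(float))
           for _ in range(n_sim))
print(f"estimated P(separation) = {hits / n_sim:.3f}")
```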

3.
Life-cycle modeling for design (LCMD) is a methodology for assessing the life-cycle impacts of a complex product with many individual components, starting from the initial design phases when few design specifications have been made. The methodology combines life-cycle assessment (LCA) with probabilistic design methods in a way that forecasts attributes of possible final designs yet reduces information needs. Specifically, LCMD is a methodology for generating arrays of design scenarios that communicate the range of designs being considered by a design team, and for estimating missing data for those design scenarios. The main contribution to enhancing standard LCA is the incorporation of methods to estimate physical attributes of individual components for various design options and of four analyses for evaluating the arrays of design scenarios. An automotive case study presented in Part 2 of this work demonstrates one application of LCMD.

4.
Moving average control charts have been presented in the quality control literature over the past 75 years; however, their conditional average run lengths have not been obtained. The objective of this article is to derive the autocorrelation function between two moving averages and then apply the bivariate normal distribution to compute the conditional type II error probability at the future time t+1, given that a manufacturing process is in statistical control at the present time t. Tables 3 through 8 show that the values of Shewhart's average run length and the corresponding conditional first-order moving-average run lengths are almost the same after a one-standard-deviation shift from the target of a normal process mean. The concluding Section 6 explains that comparisons of the two average run lengths are not on a valid statistical basis.
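A sketch of the bivariate-normal calculation suggested by the abstract, assuming a span-w moving average of i.i.d. normal data, so that corr(MA_t, MA_{t+1}) = (w-1)/w, and assuming the mean shift enters only through the newest observation (the paper's exact conditioning may differ):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

sigma, w, L = 1.0, 5, 3.0      # process sd, MA span, limit width (in MA sd's)
delta = 1.0                    # mean shift (in process sd's) entering at t+1

sd_ma = sigma / np.sqrt(w)
rho = (w - 1) / w              # corr(MA_t, MA_{t+1}): spans share w-1 points
mvn = multivariate_normal(
    mean=[0.0, delta * sigma / w],          # only the newest point is shifted
    cov=(sd_ma ** 2) * np.array([[1.0, rho], [rho, 1.0]]))

lo, hi = -L * sd_ma, L * sd_ma
# P(both MAs inside the limits) by inclusion-exclusion on the joint CDF
p_joint = (mvn.cdf([hi, hi]) - mvn.cdf([hi, lo])
           - mvn.cdf([lo, hi]) + mvn.cdf([lo, lo]))
p_t = norm.cdf(L) - norm.cdf(-L)            # P(MA_t inside), in control
print("conditional type II error at t+1:", p_joint / p_t)
```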

5.
When it is known a priori that some contrasts are negligible in a factorial design, their expressions can be used to deduce the missing results. In this article we propose a method for using this procedure when, as in the case of fractional designs, it is not known which contrasts will be null. The method is based on first establishing an interval of possible values corresponding to each of the missing results, and then identifying which contrasts are always null independently of the values of those results.

6.
A new generalized probabilistic approach to uncertainties is proposed for computational models in structural linear dynamics and can be extended without difficulty to computational linear vibroacoustics and to computational nonlinear structural dynamics. This method allows the prior probability model of each type of uncertainty (model-parameter uncertainties and modeling errors) to be separately constructed and identified. The modeling errors are taken into account not with the usual output-prediction-error method but with the recently introduced nonparametric probabilistic approach of modeling errors, which is based on the use of random matrix theory. The theory, an identification procedure, and a numerical validation are presented. A chaos decomposition with random coefficients is then proposed to represent the prior probabilistic model of the random responses. The random germ is related to the prior probability model of the model-parameter uncertainties, while the random coefficients are related to the prior probability model of the modeling errors and thus depend on the random matrices introduced by the nonparametric probabilistic approach. A validation is presented. Finally, a future perspective is outlined for the case in which experimental data are available: the prior probability model of the random coefficients can then be improved by constructing a posterior probability model using the Bayesian approach. Copyright © 2009 John Wiley & Sons, Ltd.

7.
This paper considers how large an in-control reference sample needs to be in order to control the effects of using estimated parameters in a normal-theory cumulative sum (CUSUM) tracking statistic. Previous research has demonstrated the effect of estimation errors on the conditional in-control average run length of the CUSUM. The contributions of this paper are simple analytical tools that determine the reference sample size required to ensure probabilistic control of the relative error of the conditional in-control average run length. The availability of these tools rounds out the design phase of the CUSUM by enabling a practical procedure for determining the needed size of the reference sample. Copyright © 2016 John Wiley & Sons, Ltd.
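The paper develops analytical tools, which are not reproduced here; purely for illustration, the quantity being controlled (the spread of the conditional in-control ARL when the CUSUM standardizes data with estimated parameters) can be approximated by brute-force Monte Carlo, with all sizes below chosen small for speed:

```python
import numpy as np

rng = np.random.default_rng(7)
k, h = 0.5, 4.0            # CUSUM reference value and decision interval
m = 200                    # Phase I reference-sample size under study
n_ref, n_rl = 200, 200     # Monte Carlo sizes (small, for illustration)

def run_length(mu_hat, sd_hat, cap=20000):
    """One in-control run length of an upper CUSUM that standardizes
    N(0,1) data with *estimated* parameters."""
    c = 0.0
    for t in range(1, cap + 1):
        z = (rng.normal() - mu_hat) / sd_hat
        c = max(0.0, c + z - k)
        if c > h:
            return t
    return cap

# known-parameter baseline ARL
arl0 = np.mean([run_length(0.0, 1.0) for _ in range(2000)])

arls = []
for _ in range(n_ref):
    ref = rng.normal(size=m)          # Phase I reference sample
    rls = [run_length(ref.mean(), ref.std(ddof=1)) for _ in range(n_rl)]
    arls.append(np.mean(rls))         # conditional ARL given this sample

rel_err = np.abs(np.array(arls) - arl0) / arl0
print(f"known-parameter ARL = {arl0:.0f}")
print(f"P(relative ARL error > 20%) = {np.mean(rel_err > 0.2):.2f}")
```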

8.
The problem of ranking and weighting experts' performances when quantitative judgments are being elicited for decision support is considered. A new scoring model, the Expected Relative Frequency (ERF) model, is presented, based on the closeness between central values provided by the expert and known values used for calibration. Using responses from experts in five different elicitation datasets, a cross-validation technique is used to compare this new approach with the Cooke Classical Model, the Equal Weights model, and individual experts. The analysis is performed using alternative reward schemes designed to capture proficiency either in quantifying uncertainty or in estimating true central values. Results show that although there is only a limited probability that one approach is consistently better than another, the Cooke Classical Model is generally the most suitable for assessing uncertainties, whereas the new ERF model should be preferred if the goal is central value estimation accuracy.

9.
This paper considers an experimentation strategy when resource constraints permit only a single design replicate per time interval and one or more design variables are hard to change. The experimental designs considered are two-level full-factorial or fractional-factorial designs run as balanced split plots. These designs are common in practice and appropriate for fitting a main-effects-plus-interactions model, while minimizing the number of times the whole-plot treatment combination is changed. Depending on the postulated model, single replicates of these designs can result in the inability to estimate error at the whole-plot level, suggesting that formal statistical hypothesis testing on the whole-plot effects is not possible. We refer to these designs as balanced two-level whole-plot saturated split-plot designs. In this paper, we show that, for these designs, it is appropriate to use ordinary least squares to analyze the subplot factor effects at the 'intermittent' stage of the experiments (i.e., after a single design replicate is run); however, formal inference on the whole-plot effects may or may not be possible at this point. We exploit the sensitivity of ordinary least squares in detecting whole-plot effects in a split-plot design and propose a data-based strategy for determining whether to run an additional replicate following the intermittent analysis or whether to simply reduce the model at the whole-plot level to facilitate testing. The performance of the proposed strategy is assessed using Monte Carlo simulation. The method is then illustrated using wind tunnel test data obtained from a NASCAR Winston Cup Chevrolet Monte Carlo stock car. Copyright © 2012 John Wiley & Sons, Ltd.

10.
A gain-flattening filter (GFF) for minimum manufacturing errors (12 designs submitted) and dense wavelength-division multiplexing (DWDM) filters for low group-delay (GD) variation (9 designs submitted) were the subjects of a design contest held in conjunction with the Optical Interference Coatings 2001 topical meeting of the Optical Society of America. Results of the contest are given and evaluated. It turned out that the parameter space for GFFs with optimum performance when manufacturing errors are not considered is much different from that when manufacturing errors are considered. DWDM filter solutions with low GD variation are possible.

11.
Inverse techniques based on vibration tests and numerical calculations are sensitive to model errors, which introduce discrepancies in the estimated parameters. In the present study, model errors related to geometrical simplifications of a laminated plate were considered. The investigation focused on the effect these errors have on the dynamic characteristics of the plate. Two types of inverse techniques were then used to show the influence of the model errors on the identified material properties. The first was an inverse technique based on design of experiments and response surface methodology, whereas the second was an iterative inverse technique. In both methods, an error functional between measured and calculated responses was minimized in order to search for an optimal set of material properties. The dynamic characteristics of the plate most affected by the model errors were established from a numerical study. They were then eliminated from the objective function, reducing the estimation error of the identified material properties.

12.
When analysing the effects of a factorial design, it is customary to take into account the probability of making a Type I error (the probability of considering an effect significant when it is not), but not the probability of making a Type II error (the probability of considering an effect non-significant when it is significant). Making a Type II error, however, may lead to incorrect decisions regarding the values that the factors should take or how subsequent experiments should be conducted. In this paper, we introduce the concept of the minimum effect size of interest and present a visualization method for selecting the critical value of the effects, the threshold above which an effect should be considered significant, that takes into account the probabilities of both Type I and Type II errors. Copyright © 2006 John Wiley & Sons, Ltd.
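The visualization method itself is not reproduced here; the underlying trade-off can be computed directly, assuming an effect estimate that is normal with known standard error (the values below are hypothetical):

```python
import numpy as np
from scipy.stats import norm

se = 1.0      # standard error of an effect estimate (assumed known)
delta = 2.5   # minimum effect size of interest, in the same units

cv = np.linspace(0.0, 5.0, 501)          # candidate critical values
alpha = 2 * (1 - norm.cdf(cv / se))      # P(declare significant | effect = 0)
beta = (norm.cdf((cv - delta) / se)      # P(declare non-significant | effect = delta)
        - norm.cdf((-cv - delta) / se))

i = np.argmin(np.abs(alpha - beta))      # e.g. balance the two error rates
print(f"critical value {cv[i]:.2f}: alpha = {alpha[i]:.3f}, beta = {beta[i]:.3f}")
```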

13.
Meng Feifan, Guo Xiuxiu, Shi Qingxuan. Engineering Mechanics (工程力学), 2022, 39(11): 133-142+165
Most studies of inerter systems do not consider the effect of clearance nonlinearity, yet research has shown that the influence of large clearances on the system response cannot be ignored. This paper establishes the stochastic differential equations of an inerter-rubber composite vibration isolation system with clearance nonlinearity. Based on stochastic nonlinear analysis methods, the statistical moments of the system response are derived and its probability density function is computed; the failure probability of the system is obtained using first-passage reliability theory, and the influence of the clearance on the statistical characteristics and reliability of the response is analyzed. The effect of the clearance nonlinearity on the response and reliability under nonstationary excitation is also considered. The results show that as the clearance grows, the statistical moments of the response increase, the probability density curves diverge rapidly, and the failure probability rises sharply; this differs from the results of deterministic analysis, so the effect of the clearance on dynamic reliability should be taken into account when designing isolators.
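The paper's derivations are analytical; as a purely illustrative stand-in, the first-passage failure probability of a single-degree-of-freedom isolator with clearance (dead-zone) contact stiffness under white-noise excitation can be brute-forced, with every parameter below hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
m, b_in = 1.0, 0.3          # mass and inertance
k, c = 40.0, 0.8            # rubber stiffness and damping
k_gap, gap = 200.0, 0.05    # contact stiffness and clearance half-width
S0 = 0.5                    # white-noise intensity (two-sided PSD)
dt, T, x_crit = 1e-3, 10.0, 0.12

def spring(x):
    """Restoring force: rubber stiffness plus contact stiffness that
    engages only once the displacement exceeds the clearance."""
    f = k * x
    if x > gap:
        f += k_gap * (x - gap)
    elif x < -gap:
        f += k_gap * (x + gap)
    return f

def first_passage(n_paths=500):
    n_steps = int(T / dt)
    fails = 0
    for _ in range(n_paths):
        x = v = 0.0
        for _ in range(n_steps):
            w = rng.normal() * np.sqrt(2.0 * np.pi * S0 / dt)  # discretized white noise
            a = (w - c * v - spring(x)) / (m + b_in)
            x, v = x + v * dt, v + a * dt
            if abs(x) > x_crit:
                fails += 1
                break
    return fails / n_paths

print("estimated first-passage failure probability:", first_passage())
```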

14.
Most research in design of experiments focuses on appropriate designs for a system with just one type of response, rather than multiple responses. In a decision-making process, relying on only one objective can lead to oversimplified, suboptimal choices that ignore important considerations. Consequently, the problem of constructing a design for an experiment when multiple types of responses are of interest often does not have a single definitive answer, particularly when the response variables have different distributions. Each of these response distributions imposes different requirements on the experimental design. Computer-generated optimal designs are popular design choices for less standard scenarios where classical designs are not ideal. This work presents a new approach to experimental designs for dual-response systems. The normal and binomial distributions are considered as potential responses. Using the D-criterion for the linear model and the Bayesian D-criterion for the logistic regression model, a weighted criterion is implemented in a coordinate-exchange algorithm. Designs are evaluated and compared across different weights. The sensitivity of the designs to the priors supplied for the Bayesian D-criterion is also explored.
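A compact sketch of a weighted-criterion coordinate-exchange search of the kind described, using a main-effects-only model for both responses for brevity; the weight, prior draws, run size, and candidate levels are illustrative assumptions, and a real implementation would add random restarts:

```python
import numpy as np

rng = np.random.default_rng(3)
n_runs, levels = 12, np.array([-1.0, 0.0, 1.0])
prior = rng.normal([0.0, 1.0, -1.0], 0.5, size=(50, 3))  # draws for Bayesian D

def model_matrix(D):
    return np.column_stack([np.ones(len(D)), D])   # intercept + two main effects

def logdet(M):
    sign, ld = np.linalg.slogdet(M)
    return ld if sign > 0 else -np.inf

def criterion(D, w=0.5):
    X = model_matrix(D)
    d_lin = logdet(X.T @ X)                        # D-criterion, linear model
    d_log = 0.0
    for beta in prior:                             # logistic information: X' W X
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        d_log += logdet(X.T @ (X * (p * (1 - p))[:, None]))
    return w * d_lin + (1 - w) * d_log / len(prior)

D = rng.choice(levels, size=(n_runs, 2))           # random starting design
best, improved = criterion(D), True
while improved:                                    # coordinate exchange
    improved = False
    for i in range(n_runs):
        for j in range(2):
            for lev in levels:
                old = D[i, j]
                if lev == old:
                    continue
                D[i, j] = lev
                val = criterion(D)
                if val > best + 1e-10:
                    best, improved = val, True
                else:
                    D[i, j] = old
print(D)
print("weighted criterion:", best)
```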

15.
Comparison of dissolution profiles may be facilitated by blocking the individual units of a given batch, thus greatly reducing the possibility of error from variation between experimental runs. Experimental designs are described which allow valid comparisons to be made between batches, as well as allowing the between-run variation to be assessed and identifying any systematic errors resulting from differences between vessels. The number of tests required may frequently be reduced, and the need for replicate testing eliminated. The limitation of 6 vessels per run imposes certain restrictions on the experimental designs possible. Applications of these experimental designs in the characterisation of dosage forms by their pH-dissolution topography and their use in factorial formulation experiments are described.

16.
The performance of attribute control charts that monitor Markov-dependent data is usually evaluated under the assumption of known process parameters, that is, known values of a, the probability that an item is nonconforming given that the previous item is conforming, and b, the probability that an item is conforming given that the previous item is nonconforming. In practice, these parameters are usually not known and are calculated from an in-control Phase I data set. In this paper, a comparison of the in-control ARL (average run length) properties of the attribute chart for Markov-dependent data with known and estimated parameters is presented. The probability distribution of the estimators is developed and used to calculate the in-control ARL and the standard deviation of the run length of the chart with estimated parameters. For particular values of a and b, the in-control ARL values of the charts with estimated parameters may be very different from those with known parameters. The size of the Phase I data set needed for charts with estimated parameters to exhibit the same in-control ARL properties as those with known parameters may vary widely depending on the parameters of the process, but in general, large samples are needed to obtain accurate estimates. As the Phase I sample size increases, the in-control ARL values of the charts with estimated parameters approach those of the known-parameter case, but not in a monotonic fashion as in the case of the X-bar chart. Copyright © 2015 John Wiley & Sons, Ltd.
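For concreteness, estimating a and b from a Phase I sequence reduces to transition-count ratios; a minimal sketch with hypothetical parameter values (the ARL calculations built on the estimators' distribution are beyond this sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
a_true, b_true = 0.05, 0.60   # hypothetical true transition probabilities
N = 1000                      # Phase I sample size

# Markov-dependent conforming(0)/nonconforming(1) sequence:
# P(1 | previous 0) = a,  P(0 | previous 1) = b
x = np.zeros(N, dtype=int)
for t in range(1, N):
    p1 = a_true if x[t - 1] == 0 else 1.0 - b_true
    x[t] = rng.random() < p1

# the estimates are simple transition-count ratios
a_hat = np.sum((x[:-1] == 0) & (x[1:] == 1)) / np.sum(x[:-1] == 0)
b_hat = np.sum((x[:-1] == 1) & (x[1:] == 0)) / np.sum(x[:-1] == 1)
print(f"a_hat = {a_hat:.3f} (true {a_true}), b_hat = {b_hat:.3f} (true {b_true})")
```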

17.
Space-filling designs allow for exploration of responses with many different settings for each input factor. While much research has been done using rectangular design spaces, it is not uncommon to have constraints on the design region where some combinations are impossible or undesirable to run. In this article, we present an intuitive method for quickly generating space-filling designs that have the flexibility to accommodate nonrectangular design regions. We also show that these designs perform favorably compared with other standard designs with respect to the average distance of an arbitrary point in space to the closest design point. This property holds even when the design region is rectangular. Copyright © 2014 John Wiley & Sons, Ltd.
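A quick stand-in for the idea (the authors' construction is not specified in the abstract): greedy maximin selection from a candidate set filtered by a hypothetical nonrectangular constraint, evaluated by the average distance from an arbitrary feasible point to its nearest design point:

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(11)

def feasible(p):
    """Hypothetical nonrectangular region: [0,1]^2 with one corner cut off."""
    return p[:, 0] + p[:, 1] <= 1.5

cand = rng.random((5000, 2))          # candidate points...
cand = cand[feasible(cand)]           # ...filtered by the constraint

n = 15                                # greedy maximin selection
design = [cand[rng.integers(len(cand))]]
for _ in range(n - 1):
    d = cdist(cand, np.asarray(design)).min(axis=1)
    design.append(cand[np.argmax(d)])
design = np.asarray(design)

# evaluation criterion from the article: average distance from an
# arbitrary feasible point to the closest design point
test = rng.random((20000, 2))
test = test[feasible(test)]
print("average nearest-design-point distance:",
      cdist(test, design).min(axis=1).mean())
```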

18.
A comparative study of methods for estimating the wind-induced fatigue life of steel antenna structures
Based on wind tunnel tests of an aeroelastic model of a rooftop steel antenna structure, the power spectra of the stress response at the critical points for fatigue analysis were computed. The fatigue life of the steel antenna was then estimated in the frequency domain with the equivalent narrow-band method, based on cumulative fatigue damage theory. In parallel, stress time histories were simulated from those power spectra by the Monte Carlo method and counted with the rainflow method; after accounting for the mean stress with the Goodman rule, the occurrence probabilities of the stress ranges were obtained, a probability density function was fitted to the stress ranges, and the fatigue life was estimated again. The results show that the wind-induced fatigue lives estimated by the two methods are close, and that the mean wind has very little influence on the fatigue life of the structure.
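A sketch of the time-domain branch of the comparison, with every number hypothetical: synthesize a stress history from an assumed response PSD, rainflow-count it (here using the third-party `rainflow` package), apply the Goodman mean-stress correction, and accumulate damage with Miner's rule under an assumed S-N curve:

```python
import numpy as np
import rainflow  # third-party package: pip install rainflow

rng = np.random.default_rng(2)

# 1) synthesize a stress history (MPa) from an assumed response PSD
#    by the spectral-representation method
df = 0.01
f = np.arange(0.5, 3.0, df)                     # frequency grid, Hz
S = np.full(f.shape, 4.0)                       # flat PSD, MPa^2/Hz (illustrative)
t = np.arange(0.0, 600.0, 0.05)                 # 10 minutes at 20 Hz
phases = rng.uniform(0.0, 2.0 * np.pi, f.size)
stress = (np.sqrt(2.0 * S * df) *
          np.cos(2.0 * np.pi * np.outer(t, f) + phases)).sum(axis=1)
stress += 20.0                                  # mean-wind stress, MPa

# 2) rainflow counting, Goodman mean-stress correction, Miner's rule
su = 400.0                                      # ultimate strength, MPa
C, m_exp = 1e12, 3.0                            # assumed S-N curve: N = C / S^m
damage = 0.0
for s_rng, s_mean, count, _, _ in rainflow.extract_cycles(stress):
    s_eq = s_rng / (1.0 - s_mean / su)          # Goodman-equivalent range
    damage += count * s_eq ** m_exp / C         # Miner increment: count / N(s_eq)

life_hours = (t[-1] / 3600.0) / damage          # simulated time / damage rate
print(f"estimated fatigue life = {life_hours:.2e} hours")
```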

19.
Accidents caused by the domino effect are among the most severe to have occurred in the chemical and process industries. However, a well-established and widely accepted methodology for the quantitative assessment of the contribution of domino accidents to industrial risk is still missing. Hence, available data on damage to process equipment caused by blast waves were reviewed in the framework of quantitative risk analysis, aiming at the quantitative assessment of domino effects caused by overpressure. Specific probit models were derived for several categories of process equipment and were compared with other literature approaches for predicting the probability of damage to equipment loaded by overpressure. The results highlight the importance of using equipment-specific models for the probability of damage and equipment-specific damage threshold values, rather than a general equipment correlation, which may lead to errors of up to 500%.
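The probit form referred to above is standard in quantitative risk analysis: Y = a + b*ln(P) for peak overpressure P, with damage probability Phi(Y - 5). A worked example with hypothetical coefficients (the paper derives equipment-category-specific values, which are not reproduced here):

```python
from math import log
from statistics import NormalDist

def probit_damage_probability(p_kpa, a, b):
    """Probability of equipment damage at peak static overpressure p (kPa),
    using the probit form Y = a + b*ln(p) with Pr = Phi(Y - 5)."""
    y = a + b * log(p_kpa)
    return NormalDist().cdf(y - 5.0)

# hypothetical coefficients for a single equipment category, for
# illustration only
a, b = -6.5, 2.5
for p in (10, 30, 70, 150):
    print(f"{p:>4} kPa -> P(damage) = {probit_damage_probability(p, a, b):.3f}")
```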

20.
Technometrics, 2013, 55(3): 436-444
Goodness-of-fit tests are proposed for the assumption of normality of random errors in experimental designs where the variance of the response may vary with the levels of the covariates. The exact distribution of standardized residuals is used to make the probability integral transform for use in tests based on the empirical distribution function. A different mean and variance are estimated for each level of the covariate; the corresponding large-sample theory is provided. The proposed tests are robust to a possible misspecification of the model and permit data collected from several similar experiments to be pooled to improve the power of the test.
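A naive sketch of the transform-and-test pipeline described above: it standardizes with estimated level-wise means and variances and then treats the transforms as exactly uniform, which is precisely the approximation the paper's exact-distribution results are designed to avoid; all data are simulated:

```python
import numpy as np
from scipy.stats import norm, cramervonmises

rng = np.random.default_rng(4)

# hypothetical experiment: 4 covariate levels with heteroscedastic normal errors
data = [rng.normal(10.0 + g, 0.5 + 0.3 * g, size=12) for g in range(4)]

# standardize with a separately estimated mean and sd at each level,
# then apply the probability integral transform and pool across levels
u = np.concatenate([norm.cdf((y - y.mean()) / y.std(ddof=1)) for y in data])

# EDF-based goodness-of-fit test of the pooled transformed values
res = cramervonmises(u, "uniform")
print(f"Cramer-von Mises statistic = {res.statistic:.3f}, p = {res.pvalue:.3f}")
```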

