Similar Articles
20 similar articles found (search time: 31 ms)
1.
Many classical symmetrical designs have desirable characteristics, one of which is called D-optimality. The D-optimality concept can also be applied to select a design when the classical symmetrical designs cannot be used, for example when the experimental region is not regular in shape, when the number of experiments required by a classical design is too large, or when one wants to apply models that deviate from the usual first- or second-order ones. The D-optimality concept is developed, and it is explained that D-optimality is only one of several possible criteria for choosing a particular design. A few other criteria are also given that complement the information obtained by the D-criterion.
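As a quick illustration (not taken from the article itself), the D-criterion scores a candidate design through the determinant of its information matrix X'X; the scaled form det(X'X/n)^(1/p) gives a D-efficiency that equals 100% for an orthogonal design such as the full two-level factorial. The model terms and the irregular-region candidate below are invented for the sketch.

```python
import numpy as np

def d_efficiency(X):
    """D-efficiency of a design with model matrix X (n runs x p terms):
    100 * det(X'X / n) ** (1/p).  Larger is better."""
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X / n) ** (1.0 / p)

def model_matrix(design):
    """First-order model with a two-factor interaction: 1, x1, x2, x1*x2."""
    x1, x2 = design[:, 0], design[:, 1]
    return np.column_stack([np.ones(len(design)), x1, x2, x1 * x2])

# Full 2^2 factorial (a classical D-optimal choice for this model) ...
factorial = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
# ... versus an arbitrary 4-run design on an irregular region.
irregular = np.array([[-1, -1], [-0.5, 1], [1, 0], [0.3, -0.8]])

print(d_efficiency(model_matrix(factorial)))   # 100.0
print(d_efficiency(model_matrix(irregular)))   # noticeably lower
```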

2.
Using mean square error as the criterion, we compare two least squares estimates of the Weibull parameters, based on non-parametric estimates of the unreliability, with the maximum likelihood estimates (MLEs). The two non-parametric estimators are that of Herd–Johnson and one recently proposed by Zimmer. Data were generated by computer simulation for three small sample sizes (5, 10 and 15), with three multiply-censored patterns for each sample size. Our results indicate that the MLE is a better estimator of the Weibull characteristic value, θ, than the least squares estimators considered. No firm conclusions may be made regarding the best estimate of the Weibull shape parameter, although the use of maximum likelihood is not recommended for small sample sizes. Whenever least squares estimation of both Weibull parameters is appropriate, we recommend the use of the Zimmer estimator of reliability. Copyright © 2001 John Wiley & Sons, Ltd.
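A minimal sketch of such a comparison, assuming a complete (uncensored) sample so that the Herd–Johnson unreliability reduces to the mean rank i/(n+1); the multiply-censored case studied in the paper needs the survival-product form instead, and the simulated data here are purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
t = np.sort(rng.weibull(1.5, size=10) * 100.0)   # true shape 1.5, scale 100
n = len(t)

# Herd-Johnson unreliability; for a complete sample it reduces to i/(n+1).
F = np.arange(1, n + 1) / (n + 1)

# Least squares on the Weibull probability plot:
# ln(-ln(1 - F)) = beta * ln(t) - beta * ln(theta)
x, y = np.log(t), np.log(-np.log(1.0 - F))
beta_ls, intercept = np.polyfit(x, y, 1)
theta_ls = np.exp(-intercept / beta_ls)

# MLE: solve the profile likelihood equation for beta, then theta.
def score(b):
    return (t**b * np.log(t)).sum() / (t**b).sum() - 1.0 / b - np.log(t).mean()

beta_ml = brentq(score, 0.05, 20.0)
theta_ml = ((t**beta_ml).mean()) ** (1.0 / beta_ml)

print(f"LS : beta={beta_ls:.3f}  theta={theta_ls:.1f}")
print(f"MLE: beta={beta_ml:.3f}  theta={theta_ml:.1f}")
```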

3.
This paper considers an experimentation strategy when resource constraints permit only a single design replicate per time interval and one or more design variables are hard to change. The experimental designs considered are two-level full-factorial or fractional-factorial designs run as balanced split plots. These designs are common in practice and appropriate for fitting a main-effects-plus-interactions model, while minimizing the number of times the whole-plot treatment combination is changed. Depending on the postulated model, single replicates of these designs can result in the inability to estimate error at the whole-plot level, suggesting that formal statistical hypothesis testing on the whole-plot effects is not possible. We refer to these designs as balanced two-level whole-plot saturated split-plot designs. In this paper, we show that, for these designs, it is appropriate to use ordinary least squares to analyze the subplot factor effects at the 'intermittent' stage of the experiments (i.e., after a single design replicate is run); however, formal inference on the whole-plot effects may or may not be possible at this point. We exploit the sensitivity of ordinary least squares in detecting whole-plot effects in a split-plot design and propose a data-based strategy for determining whether to run an additional replicate following the intermittent analysis or whether to simply reduce the model at the whole-plot level to facilitate testing. The performance of the proposed strategy is assessed using Monte Carlo simulation. The method is then illustrated using wind tunnel test data obtained from a NASCAR Winston Cup Chevrolet Monte Carlo stock car. Copyright © 2012 John Wiley & Sons, Ltd.
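To make the ordinary-least-squares step concrete, the sketch below simulates a single replicate of a 2^3 design run as a split plot (factor A hard to change) and estimates all factorial effects by orthogonal contrasts. The effect sizes and error variances are invented; this is not the article's proposed strategy or its wind tunnel data.

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)

# Single replicate of a 2^3 design run as a split plot: A is hard to change
# (whole-plot factor), B and C are subplot factors.
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
A, B, C = runs.T

# True model: A, B, and AB active; a whole-plot error attaches to each level of A.
wp_err = {a: rng.normal(0, 1.0) for a in (-1.0, 1.0)}
y = 3*A + 2*B + 1.5*A*B + np.array([wp_err[a] for a in A]) + rng.normal(0, 0.5, 8)

# OLS effect estimates via orthogonal contrasts: effect = contrast'y / (n/2).
X = np.column_stack([A, B, C, A*B, A*C, B*C])
for name, col in zip(["A", "B", "C", "AB", "AC", "BC"], X.T):
    print(f"{name:>2}: {col @ y / 4:+.2f}")
# Subplot effects (B, C, AB, ...) are judged against the small subplot error;
# the whole-plot effect A absorbs the whole-plot error, and with a single
# replicate there is no pure whole-plot error term to test it against.
```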

4.
Ananda Sen, Technometrics, 2013, 55(4): 334–344
In this article, I investigate statistical inference concerning the current (field-stage) reliability in a reliability growth model. The model, assuming a step-intensity structure, evolves from the physical consideration of the Duane learning-curve property and incorporates the effect of a test-analyze-and-fix program that is typically undertaken in developmental testing. Both exact and large-sample distributional results are derived for the maximum likelihood and the least squares estimators of the current intensity. Under the assumption that the step-intensity model represents reality, I provide an assessment of the extent of "misspecification" when the widely used power law process model is fit to the failure data of a system experiencing recurrent failures. Extensive simulations are carried out to supplement the theoretical findings. An illustration with a dataset demonstrates the application of the inference results.
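For reference, the power law process used as the comparison model has closed-form maximum likelihood estimates for failure-truncated data, sketched below with hypothetical failure times; note this is the benchmark model, not the article's step-intensity model.

```python
import numpy as np

def power_law_mle(t):
    """MLE for the power law (Crow-AMSAA) process with failure-truncated data.
    t: ordered cumulative failure times; intensity rho(t) = lam*beta*t**(beta-1)."""
    t = np.asarray(t, dtype=float)
    n, T = len(t), t[-1]
    beta = n / np.log(T / t).sum()          # last term ln(T/T) = 0 drops out
    lam = n / T**beta
    current_intensity = lam * beta * T**(beta - 1)   # equals n*beta/T
    return beta, lam, current_intensity

# Hypothetical cumulative failure times of a system under test-analyze-and-fix (hours).
times = [4.3, 10.1, 33.0, 81.1, 180.0, 361.5, 502.1, 800.0]
beta, lam, rho = power_law_mle(times)
print(f"beta={beta:.3f}  lambda={lam:.4f}  current intensity={rho:.4f}/h")
# beta < 1 indicates reliability growth (decreasing failure intensity).
```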

5.
This article concerns the optimization of measurement plans in the design of bivariate degradation tests for bivariate Wiener processes. After describing an unbalanced measurement scheme for bivariate degradation tests, we derive the likelihood function and provide a method for estimating the model parameters based on maximum likelihood and least squares. From the corresponding Fisher information matrix, we deduce an important insight: longer degradation tests and longer intervals between measurements result in more precise parameter estimates. We introduce a model for optimizing the degradation test measurement plan that incorporates practical constraints and objectives in the test design framework, and we present a search-based algorithm, built on the aforementioned measurement rule, to identify the optimal test measurement plan. Via a simulation study and a case study involving the Rubidium Atomic Frequency Standard, we demonstrate the characteristics of optimal measurement plans for bivariate degradation test design and show the superiority of longer-duration tests involving fewer samples over alternative designs that test more samples over shorter periods of time. Copyright © 2013 John Wiley & Sons, Ltd.
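The longer-test insight is easy to verify in the univariate special case: for a Wiener path X(t) = mu*t + sigma*B(t), the drift MLE has variance sigma^2/T no matter how many measurements are packed into [0, T]. A univariate Monte Carlo sketch with illustrative parameters follows (the article's bivariate model additionally involves the dependence between the two degradation components):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 0.5, 0.2                     # true drift and diffusion

def simulate_and_fit(T, m):
    """One univariate Wiener degradation path measured m times over [0, T]."""
    dt = np.full(m, T / m)
    dx = rng.normal(mu * dt, sigma * np.sqrt(dt))     # independent increments
    mu_hat = dx.sum() / dt.sum()                      # MLE of the drift
    s2_hat = ((dx - mu_hat * dt) ** 2 / dt).mean()    # MLE of sigma^2
    return mu_hat, s2_hat

# Var(mu_hat) = sigma^2 / T: a longer test gives a more precise drift
# estimate, independent of how many measurements are packed into it.
for T in (10.0, 100.0):
    est = np.array([simulate_and_fit(T, 20) for _ in range(2000)])
    print(f"T={T:5.0f}: sd(mu_hat) = {est[:, 0].std():.4f}"
          f"   (theory {sigma / np.sqrt(T):.4f})")
```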

6.
Problems requiring regression analysis of censored data arise frequently in practice. For example, in accelerated testing one wishes to relate stress and average time to failure from data including unfailed units, i.e., censored observations.

Maximum likelihood is one method for obtaining the desired estimates; in this paper, we propose an alternative approach. An initial least squares fit is obtained by treating the censored values as failures. Then, based upon this initial fit, the expected failure time for each censored observation is estimated. These estimates are then used, instead of the censoring times, to obtain a revised least squares fit, and new expected failure times are estimated for the censored values. These are then used in a further least squares fit. The procedure is iterated until convergence (a minimal sketch of the iteration follows this abstract). This method is simpler to implement and explain to non-statisticians than maximum likelihood and appears to have good statistical and convergence properties.

The method is illustrated by an example, and some simulation results are described. Variations and areas for further study are also discussed.
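A sketch of that iteration under a normal error model, where the expected failure time given survival past c is E[Y | Y > c] = mu + sigma*phi(z)/(1 - Phi(z)) with z = (c - mu)/sigma; the function name, data and convergence settings are illustrative assumptions, not the paper's example.

```python
import numpy as np
from scipy.stats import norm

def iterative_censored_ls(x, y, censored, n_iter=50, tol=1e-8):
    """Iterative least squares for right-censored regression data.
    censored[i] True means y[i] is a censoring time, not a failure time.
    Censored responses are replaced by their conditional expected failure
    times under a normal error model, and the fit is repeated to convergence."""
    X = np.column_stack([np.ones_like(x), x])
    y_work = y.astype(float)
    beta = np.linalg.lstsq(X, y_work, rcond=None)[0]
    for _ in range(n_iter):
        sigma = (y_work - X @ beta).std(ddof=2)
        # E[Y | Y > c] = mu + sigma * phi(z) / (1 - Phi(z)), z = (c - mu)/sigma
        mu_c = (X @ beta)[censored]
        z = (y[censored] - mu_c) / sigma
        y_work[censored] = mu_c + sigma * norm.pdf(z) / norm.sf(z)
        beta_new = np.linalg.lstsq(X, y_work, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta

# Hypothetical data: log-life vs. stress, with three units still unfailed.
x = np.array([1.0, 1.0, 1.5, 1.5, 2.0, 2.0])
y = np.array([6.0, 6.4, 5.1, 5.3, 4.0, 4.4])   # log failure/censoring times
cens = np.array([False, True, False, True, False, True])
print(iterative_censored_ls(x, y, cens))        # [intercept, slope]
```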

7.
The purpose of this article is to review the practical use of the concept of D-optimality in response surface design. Second-order quasi-D-optimum designs on a cube have been constructed and their high efficiency demonstrated. A number of previously constructed second-order response surface designs have been compared by computer. It is shown that the D-optimality concept, being more general than rotatability, can be used as the theoretical basis for building and comparing the response surface designs in use.

8.
When planning an experimental investigation, we are frequently faced with factors that are difficult or time consuming to manipulate, thereby making complete randomization impractical. A split-plot structure differentiates between the experimental units associated with these hard-to-change factors and those that are relatively easy to change. Furthermore, it provides an efficient strategy that integrates the restrictions imposed by the experimental apparatus into the design structure. In this paper, several industrial and scientific examples are presented to highlight design considerations when a restriction on randomization is encountered. We propose classes of split-plot response surface designs that provide an intuitive and natural extension from the completely randomized context. For these designs, the ordinary least-squares estimates of the model are equivalent to the generalized least-squares estimates. This property provides best linear unbiased estimators and simplifies model estimation. The design conditions that provide equivalent estimation are presented and lead to construction strategies for transforming completely randomized Box–Behnken, equiradial and small composite designs into a split-plot structure. Published in 2006 by John Wiley & Sons, Ltd.

9.
We address the problem of smooth power spectral density estimation of zero-mean stationary Gaussian processes when only a short observation set is available for analysis. The spectra are described by a long autoregressive model whose coefficients are estimated in a Bayesian regularized least squares (RLS) framework accounting for a spectral smoothness prior. The critical computation of the tradeoff parameters is addressed using both maximum likelihood (ML) and generalized cross-validation (GCV) criteria in order to tune the spectral smoothness automatically. The practical interest of the method is demonstrated by a simulation study in the field of Doppler spectral analysis. In a Monte Carlo study with a known spectral shape, quantitative indexes such as bias and variance, as well as quadratic, logarithmic, and Kullback distances, show clear improvements over the usual least squares method, whatever the data window length and the signal-to-noise ratio (SNR).
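A sketch of the core computation under simplifying assumptions: a second-difference penalty on the AR coefficients stands in for the paper's spectral smoothness prior, and the tradeoff parameter is fixed by hand rather than tuned by ML or GCV. The AR(2) test signal is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Short observation of a zero-mean stationary Gaussian process (AR(2) here).
N, P = 64, 20                               # short record, long AR model
x = np.zeros(N + 200)
for n in range(2, len(x)):
    x[n] = 1.3 * x[n-1] - 0.8 * x[n-2] + rng.normal()
x = x[200:]                                  # discard burn-in

# Regularized least squares for the AR coefficients:
# minimize ||y - X a||^2 + lam * ||D a||^2, with D a second-difference
# operator acting as a smoothness penalty on the coefficient sequence.
X = np.column_stack([x[P - k - 1: N - k - 1] for k in range(P)])
y = x[P:]
D = np.diff(np.eye(P), n=2, axis=0)
lam = 10.0                                   # tradeoff parameter (ML/GCV in the paper)
a = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
s2 = np.mean((y - X @ a) ** 2)

# AR power spectral density: S(w) = s2 / |1 - sum_k a_k e^{-jwk}|^2
w = np.linspace(0, np.pi, 256)
A = 1.0 - np.exp(-1j * np.outer(w, np.arange(1, P + 1))) @ a
psd = s2 / np.abs(A) ** 2
print(w[psd.argmax()])                       # peak near the AR(2) resonance
```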

10.
Most procedures that have been proposed to identify dispersion effects in unreplicated factorial designs assume that location effects have been identified correctly. Incorrect identification of location effects may impair subsequent identification of dispersion effects. We develop a method for joint identification of location and dispersion effects that can reliably identify active effects of both types. A normal-based model containing parameters for effects in both the mean and variance is used. Parameters are estimated using maximum likelihood, and subsequent effect selection is done using a specially derived information criterion. An exhaustive search through a limited version of the space of possible models is conducted. Both a single-model output and model averaging are considered. The method is shown to be capable of identifying sensible location-dispersion models that are missed by methods that rely on sequential estimation of location and dispersion effects. Supplementary materials for this article are available online.

11.
In general, modeling data from blocked and split-plot response surface experiments requires the use of generalized least squares and the estimation of two variance components. The literature on the optimal design of blocked and split-plot response surface experiments, however, focuses entirely on the precise estimation of the fixed factor effects and completely ignores the necessity to estimate the variance components as well. To overcome this problem, we propose a new Bayesian optimal design criterion which focuses on both the variance components and the fixed effects. A novel feature of the criterion is that it incorporates prior information about the variance components through log-normal or beta prior distributions. The resulting designs allow for a more powerful statistical inference than traditional optimal designs. In our algorithm for generating optimal blocked and split-plot designs, we implement efficient quadrature approaches for the numerical approximation of the new optimal design criterion. Supplementary materials for this article are available online.

12.
The Weibull shape parameter is important in reliability estimation as it characterizes the ageing property of the system. Hence, this parameter has to be estimated accurately. This paper presents a study of the efficiency of robust regression methods relative to the ordinary least-squares regression method based on a Weibull probability plot. The emphasis is on the estimation of the shape parameter of the two-parameter Weibull distribution. Both small data sets with outliers and multiply-censored data sets are considered. Maximum-likelihood estimation is also compared with the linear regression methods. Simulation results show that robust regression is an effective method for reducing bias, and it performs well in most cases. Copyright © 2006 John Wiley & Sons, Ltd.
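One concrete robust choice is Theil–Sen regression (the median of pairwise slopes); the paper evaluates several robust methods, so treat this estimator, the median ranks, and the planted outlier as illustrative assumptions.

```python
from itertools import combinations

import numpy as np

def theil_sen(x, y):
    """Robust line fit: median of pairwise slopes, median-based intercept."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2) if x[j] != x[i]]
    b = np.median(slopes)
    return b, np.median(y - b * x)

rng = np.random.default_rng(5)
t = np.sort(rng.weibull(2.0, 15) * 50.0)     # true shape beta = 2
t[-1] *= 5.0                                  # plant one outlier

n = len(t)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median (Bernard) ranks
x, y = np.log(t), np.log(-np.log(1 - F))      # Weibull probability plot scales

beta_ols = np.polyfit(x, y, 1)[0]
beta_rob, _ = theil_sen(x, y)
print(f"OLS shape estimate:    {beta_ols:.2f}")
print(f"Robust shape estimate: {beta_rob:.2f}   (true beta = 2)")
```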

13.
In this paper, a new class of lifetime distribution, called the Topp–Leone (J-shaped) geometric distribution, is obtained by compounding the Topp–Leone and geometric distributions. Reliability and statistical properties of the new distribution, such as quantiles, moments, hazard rate, reversed hazard rate, mean residual life, mean inactivity time, entropies, the moment generating function, order statistics and their stochastic orderings, are obtained. Estimation of the model parameters by least squares, weighted least squares and maximum likelihood is derived, along with the observed information matrix. Finally, a real data set is analyzed for illustrative purposes.
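For intuition, the Topp–Leone distribution has the closed-form cdf F(x) = (1 - (1 - x)^2)^a on (0, 1) and so is easy to sample by inversion; one standard compounding route then takes the minimum of a geometric number of such lifetimes. Whether this matches the paper's exact construction is an assumption, so the sketch is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def rtopp_leone(a, size):
    """Topp-Leone sampling by inversion: F(x) = (1 - (1 - x)^2)^a on (0, 1)."""
    u = rng.uniform(size=size)
    return 1.0 - np.sqrt(1.0 - u ** (1.0 / a))

def rtl_geometric(a, p, size):
    """One common compounding route: X = min(X_1, ..., X_N) of iid Topp-Leone
    lifetimes with N geometric on {1, 2, ...}; the paper's exact convention
    may differ, so treat this as an illustrative construction."""
    N = rng.geometric(p, size=size)
    return np.array([rtopp_leone(a, n).min() for n in N])

x = rtl_geometric(a=2.0, p=0.4, size=10_000)
print(f"mean={x.mean():.3f}  median={np.median(x):.3f}")
```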

14.
This article presents the design of a metamaterial for the shear layer of a nonpneumatic tire using topology optimization, under stress and buckling constraints. These constraints are implemented through a smooth maximum function using global aggregation. A linear elastic finite element model is used, implementing solid isotropic material with penalization. Design sensitivities are determined by the adjoint method. The method of moving asymptotes is used to solve the numerical optimization problem. Two different optimization statements are used. Each requires a compliance limit and some aspect of continuation. The buckling analysis is linear, considering the generalized eigenvalue problem of the conventional and stress stiffness matrices. Various symmetries, base materials, and starting geometries are considered. This leads to novel topologies that all achieve the target effective shear modulus of 10 MPa, while staying within the stress constraint. The stress-only designs were generally susceptible to buckling failure. A family of designs (columnar, noninterconnected representative unit cells) that emerges in this study appears to exhibit favorable properties for this application.

15.
While the orthogonal design of split-plot fractional factorial experiments has received much attention already, there are still major voids in the literature. First, designs with one or more factors acting at more than two levels have not yet been considered. Second, published work on nonregular fractional factorial split-plot designs was either based only on Plackett–Burman designs, or on small nonregular designs with limited numbers of factors. In this article, we present a novel approach to designing general orthogonal fractional factorial split-plot designs. One key feature of our approach is that it can be used to construct two-level designs as well as designs involving one or more factors with more than two levels. Moreover, the approach can be used to create two-level designs that match or outperform alternative designs in the literature, and to create two-level designs that cannot be constructed using existing methodology. Our new approach involves the use of integer linear programming and mixed integer linear programming, and, for large design problems, it combines integer linear programming with variable neighborhood search. We demonstrate the usefulness of our approach by constructing two-level split-plot designs of 16–96 runs, an 81-run three-level split-plot design, and a 48-run mixed-level split-plot design. Supplementary materials for this article are available online.

16.
IET Communications, 2009, 3(1): 17–24
The maximum likelihood detection problem in many underdetermined linear communications systems can be described as an underdetermined integer least squares (ILS) problem. To solve it efficiently, a partial regularisation approach is proposed. The original underdetermined ILS problem is first transformed into an equivalent overdetermined ILS problem by using part of the transmit vector to do the regularisation. Then the overdetermined ILS problem is solved by conventional sphere decoding algorithms. Simulation results indicate that this approach can be much more efficient than other approaches for any square constellation higher than 4QAM.
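For orientation, the brute-force ML/ILS baseline that sphere decoding and the partial regularisation are designed to avoid is sketched below; the channel dimensions, 4-QAM alphabet and noise level are invented, and the exhaustive search cost grows as 4^n.

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(9)

# Underdetermined linear system: m receive antennas < n transmit symbols.
m, n = 3, 5
H = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
alphabet = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)   # 4-QAM

s = rng.choice(alphabet, n)                  # transmitted vector
y = H @ s + 0.05 * (rng.normal(size=m) + 1j * rng.normal(size=m))

# Exhaustive ML / integer least squares: argmin_x ||y - Hx||^2 over all 4^n
# candidates; the finite alphabet supplies the constraints the linear
# system lacks, but the search cost is exponential in n.
best, best_cost = None, np.inf
for cand in product(alphabet, repeat=n):
    cost = np.linalg.norm(y - H @ np.array(cand)) ** 2
    if cost < best_cost:
        best, best_cost = np.array(cand), cost

print("sent:    ", np.round(s, 3))
print("detected:", np.round(best, 3))
```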

17.
A structure is said to be fully stressed if every member of the structure is stressed to its maximum allowable limit for at least one of the loading conditions. Fully stressed design is most commonly used for small and medium size frames where drift is not a primary concern. There are several potential methods available to the engineer to proportion a fully stressed frame structure. The most commonly used methods are those taught to all structural engineering students and are very easy to understand and to implement. These conventional methods are based on the intuitive idea that if a member is overstressed, it should be made larger. If a member is understressed, it can be made smaller, saving valuable material. It has been found that a large number of distinct fully stressed designs can exist for a single frame structure subjected to multiple loading conditions. This study will demonstrate that conventional methods are unable to converge to many, if not most, of these designs. These unobtainable designs are referred to as 'repellers' under the action of conventional methods. Other, more complicated methods can be used to locate these repelling fully stressed designs. For example, Newton's method can be used to solve a non-linear system of equations that defines the fully stressed state. However, Newton's method can be plagued by divergence and also by convergence to physically meaningless solutions. This study will propose a new fully stressed design technique that does not have these problems. Copyright © 2001 John Wiley & Sons, Ltd.
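A minimal sketch of the conventional stress-ratio rule (enlarge an overstressed member, shrink an understressed one), assuming a statically determinate truss with invented member forces; because determinate forces do not change with the areas, the iteration converges at once, whereas in the indeterminate frames the article studies each resizing requires reanalysis, which is where the repeller behaviour arises.

```python
import numpy as np

# Stress-ratio resizing for a statically determinate truss under two loading
# conditions.  forces[c][i] = axial force in member i under condition c (kN).
forces = np.array([[120.0, -45.0, 80.0],     # condition 1
                   [ 60.0, -90.0, 10.0]])    # condition 2
sigma_allow = 0.15                           # allowable stress, kN/mm^2 (150 MPa)
A = np.full(3, 1000.0)                       # starting member areas, mm^2

for it in range(20):
    stress = np.abs(forces) / A              # |sigma| per condition and member
    ratio = stress.max(axis=0) / sigma_allow # the worst condition governs
    A_new = A * ratio                        # grow if overstressed, shrink if not
    if np.allclose(A_new, A, rtol=1e-10):
        break
    A = A_new

print(A)   # each member at its allowable for at least one loading condition
```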

18.
In order to reduce the effects of external magnetic fields on the accuracy of magnetic sensor measurements used for the reconstruction of ac electric currents flowing in massive parallel conductors, we use a spatial circular harmonic expansion of the magnetic scalar potential. Thanks to the linearity of the magnetic field problem with respect to the sources, we can then apply the least squares inversion and obtain the set of currents from the knowledge of the magnetic field data collected by the sensor array in the vicinity of the current carrying conductors. Furthermore, we can optimize the positions and the orientations of the magnetic sensors using D-optimality theory and particle swarm optimization.
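A deliberately simplified sketch of the inversion step, assuming a 2-D scalar model in which each long straight conductor contributes B = mu0*I/(2*pi*r) at a sensor; the geometry and noise level are invented, and the determinant printed at the end is the D-optimality score that a layout search such as the article's particle swarm optimization would maximize.

```python
import numpy as np

rng = np.random.default_rng(2)
mu0 = 4e-7 * np.pi

def lead_field(sensor_xy, conductor_xy):
    """Field-contribution matrix for long straight conductors (2-D scalar
    model): B = mu0 * I / (2*pi*r).  Rows: sensors, columns: conductors."""
    d = sensor_xy[:, None, :] - conductor_xy[None, :, :]
    r = np.linalg.norm(d, axis=2)
    return mu0 / (2 * np.pi * r)

conductors = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]])    # positions, m
sensors = np.array([[0.05, 0.03], [0.15, 0.03], [0.05, -0.03],
                    [0.15, -0.03], [0.25, 0.03]])
M = lead_field(sensors, conductors)

I_true = np.array([100.0, -60.0, 40.0])                        # currents, A
b = M @ I_true + rng.normal(0, 1e-7, len(sensors))             # noisy field data

I_hat, *_ = np.linalg.lstsq(M, b, rcond=None)                  # LS inversion
print(I_hat)

# D-optimality score for the sensor layout: larger det(M'M) means
# lower-variance current estimates.
print(np.linalg.det(M.T @ M))
```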

19.
Experiment plans formed by combining two or more designs, such as orthogonal arrays with primarily 2- and 3-level factors, to create multi-level arrays with subsets of different strength are proposed for computer experiments used in sensitivity analysis. Specific illustrations are designs for 5-level factors with fewer runs than generally required for 5-level orthogonal arrays of strength 2 or more. At least 5 levels for each input are desired to allow for runs at a nominal value, two values either side of nominal but within a normal, anticipated range, and two more extreme values either side of nominal. This number of levels allows a broader range of input combinations over which a simulation code is exercised. Five-level factors also allow the possibility of up to fourth-order polynomial models for fitting simulation results, at least in one dimension. By having subsets of runs with more than strength 2, interaction effects may also be considered. The resulting designs have a "checker-board" pattern in lower-dimensional projections, in contrast to the grid projections that occur with orthogonal arrays. Space-filling properties are also considered as a basis for experiment design assessment.

20.
Improving the quality of a product or process using a computer simulator is a much less expensive option than real physical testing. However, simulation using computationally intensive computer models can be time consuming, and therefore directly optimizing on the computer simulator can be infeasible. Experimental design and statistical modeling techniques can be used to overcome this problem. This article reviews experimental designs known as space-filling designs that are suitable for computer simulations. Special emphasis is given to a recently developed space-filling design called the maximum projection design. Its advantages are illustrated using a simulation conducted for optimizing a milling process.
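The maximum projection (MaxPro) criterion keeps design points apart in every low-dimensional projection by minimizing psi(D) = [ (1 / C(n,2)) * sum_{i<j} prod_l (x_il - x_jl)^{-2} ]^{1/p}. The sketch below scores random Latin hypercubes with this criterion, a crude stand-in for the dedicated optimization in, e.g., the MaxPro R package.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(4)

def maxpro_criterion(D):
    """Maximum projection criterion (smaller is better): penalizes any pair
    of runs that nearly coincide in ANY one-dimensional projection."""
    n, p = D.shape
    total = sum(1.0 / np.prod((D[i] - D[j]) ** 2)
                for i, j in combinations(range(n), 2))
    return (total / (n * (n - 1) / 2)) ** (1.0 / p)

def random_lhd(n, p):
    """Random Latin hypercube design on [0, 1]^p."""
    return np.column_stack([(rng.permutation(n) + rng.uniform(size=n)) / n
                            for _ in range(p)])

# Crude search: keep the best of many random Latin hypercubes.
best = min((random_lhd(12, 3) for _ in range(500)), key=maxpro_criterion)
print(maxpro_criterion(best))
```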
