Similar Documents
20 similar documents found (search time: 62 ms).
1.
This paper considers an experimentation strategy when resource constraints permit only a single design replicate per time interval and one or more design variables are hard to change. The experimental designs considered are two-level full-factorial or fractional-factorial designs run as balanced split plots. These designs are common in practice and appropriate for fitting a main-effects-plus-interactions model, while minimizing the number of times the whole-plot treatment combination is changed. Depending on the postulated model, single replicates of these designs can result in the inability to estimate error at the whole-plot level, suggesting that formal statistical hypothesis testing on the whole-plot effects is not possible. We refer to these designs as balanced two-level whole-plot saturated split-plot designs. In this paper, we show that, for these designs, it is appropriate to use ordinary least squares to analyze the subplot factor effects at the 'intermittent' stage of the experiments (i.e., after a single design replicate is run); however, formal inference on the whole-plot effects may or may not be possible at this point. We exploit the sensitivity of ordinary least squares in detecting whole-plot effects in a split-plot design and propose a data-based strategy for determining whether to run an additional replicate following the intermittent analysis or whether to simply reduce the model at the whole-plot level to facilitate testing. The performance of the proposed strategy is assessed using Monte Carlo simulation. The method is then illustrated using wind tunnel test data obtained from a NASCAR Winston Cup Chevrolet Monte Carlo stock car. Copyright © 2012 John Wiley & Sons, Ltd.
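To make the intermittent-stage analysis concrete, here is a minimal sketch, with hypothetical factor roles and responses, of fitting the main-effects-plus-two-factor-interactions model by ordinary least squares to a single replicate of a 2^3 design run as a split plot; because the whole-plot stratum is saturated, only the subplot effects (B, C, AB, AC, BC) would admit formal tests at this stage.

```python
import itertools
import numpy as np

# One replicate of the 2^3 full factorial in coded +/-1 units; A is the
# hard-to-change whole-plot factor, B and C are subplot factors.
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
A, B, C = runs[:, 0], runs[:, 1], runs[:, 2]

# Hypothetical responses, one per run (single replicate).
y = np.array([10.1, 11.8, 9.6, 12.4, 10.5, 12.0, 9.9, 13.1])

# Main-effects-plus-two-factor-interactions model matrix.
X = np.column_stack([np.ones(8), A, B, C, A * B, A * C, B * C])
labels = ["I", "A", "B", "C", "AB", "AC", "BC"]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(labels[1:], beta[1:]):      # skip the intercept
    print(f"{name:>2}: effect estimate {2 * b:+.3f}")  # effect = 2 x coefficient
```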

2.
When experimental resources are significantly constrained, resolution V fractional factorial designs are often prohibitively large for experiments with 6 or more factors. Resolution IV designs may also be cost prohibitive, as additional experimentation may be required to de-alias active 2-factor interactions (2FIs). This paper introduces 20-run no-confounding screening designs for 6 to 12 factors as alternatives to resolution IV designs. No-confounding designs have orthogonal main effects, and since no 2FI is completely confounded with another main effect or 2FI, the experimental results can be analyzed without follow-on experimentation. The paper concludes with the results of a Monte Carlo simulation used to assess the model-fitting accuracy of the recommended designs.
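The defining property, that no main effect or 2FI is completely confounded with another, can be checked mechanically from the design matrix. The sketch below is a generic check, not the paper's construction: it reports pairwise correlations among effect columns. The demo uses a regular 2^(4-1) fraction, which fails exactly where its alias chains predict; a 20-run no-confounding design would report no |r| = 1 pairs.

```python
import itertools
import numpy as np

def confounding_report(D):
    """D: (runs x factors) matrix in +/-1 coding."""
    n, k = D.shape
    cols = {f"x{i+1}": D[:, i] for i in range(k)}          # main effects
    for i, j in itertools.combinations(range(k), 2):       # all 2FIs
        cols[f"x{i+1}x{j+1}"] = D[:, i] * D[:, j]
    names = list(cols)
    worst = 0.0
    for a, b in itertools.combinations(names, 2):
        r = abs(np.corrcoef(cols[a], cols[b])[0, 1])
        worst = max(worst, r)
        if np.isclose(r, 1.0):
            print(f"completely confounded: {a} with {b}")
    print(f"max |correlation| among effect columns: {worst:.3f}")

# Demo: a regular 2^(4-1) fraction (x4 = x1*x2*x3) flags its aliased
# 2FI pairs (x1x2 with x3x4, etc.); a no-confounding design would not.
base = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
confounding_report(np.column_stack([base, base.prod(axis=1)]))
```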

3.
Blocking is commonly used in experimental design to eliminate unwanted variation by creating more homogeneous conditions for experimental treatments within each block. While it has been a standard practice in experimental design, blocking fractional factorials still presents many challenges because of differences between treatment and blocking variables. Lately, new design criteria such as the total number of clear effects and fractional resolution have been proposed for designing blocked two-level fractional factorial experiments. This article presents a flexible matrix representation for two-level fractional factorials that will allow experimenters and software developers to block such experiments based on any design criterion suitable for the experimental conditions. Copyright © 2006 John Wiley & Sons, Ltd.
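One convenient use of such a matrix representation is a direct confounding check: an effect contrast is confounded with blocks exactly when its column lies in the span of the block indicator columns. The sketch below implements this test with a least-squares projection; the representation used here is an assumption for illustration, not the paper's exact formulation.

```python
import itertools
import numpy as np

def confounded_with_blocks(contrast, blocks):
    """contrast: (n,) +/-1 column; blocks: (n,) integer block labels."""
    Z = np.equal.outer(blocks, np.unique(blocks)).astype(float)  # indicators
    coef, *_ = np.linalg.lstsq(Z, contrast, rcond=None)
    resid = contrast - Z @ coef
    return np.allclose(resid, 0.0)       # True: contrast lies in block space

# 2^3 design split into 2 blocks using ABC as the block generator.
D = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
blocks = (D.prod(axis=1) > 0).astype(int)             # block = sign of ABC
print(confounded_with_blocks(D[:, 0], blocks))        # A: False (clear)
print(confounded_with_blocks(D.prod(axis=1), blocks)) # ABC: True (confounded)
```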

4.
C. Zhao, H. Matsuda, C. Morita, M. R. Shen. Strain, 2011, 47(5): 405-413
A failure strength model for brittle materials with a pre-existing open-hole defect is proposed in this paper. A modified Sammis-Ashby model is derived, which can be used to calculate the peak strength of brittle materials. It expresses the relationship between the peak strength σp and the independent variable μ, defined as the ratio of the open-hole radius (a) to the half-width of the specimen (t). Moreover, numerical and experimental investigations of the failure process of rock-like materials with an open-hole imperfection were carried out. In the experiments, the 3D digital image correlation method, an optical technique providing full-field, non-contact measurement, was employed. A progressive elastic damage method, realistic failure process analysis (RFPA), was used in the numerical investigation to verify the modified model and simulate the failure process. The investigation finds good correlations between the experimental, numerical, and theoretical results. Moreover, because of the influence of the boundary conditions, a shear failure mode was observed both experimentally and numerically.

5.
Most preset response surface methodology (RSM) designs offer ease of implementation and good performance over a wide range of process and design optimization applications. However, these designs lack the ability to adapt to the characteristics of the application and the experimental space so as to reduce the number of experiments necessary. Hence, they are not cost-effective for applications where the cost of experimentation is high or where experimentation resources are limited. In this paper, we present an adaptive sequential response surface methodology (ASRSM) for industrial experiments with high experimentation cost, limited experimental resources, and demanding design optimization performance requirements. The proposed approach is a sequential adaptive experimentation approach that combines concepts from nonlinear optimization, design of experiments, and response surface optimization. The ASRSM uses the information gained from previous experiments to design the subsequent experiment by simultaneously reducing the region of interest and identifying factor combinations for new experiments. Its major advantage is experimentation efficiency: for a given response target, it identifies the input factor combination (or containing region) in fewer experiments than classical single-shot RSM designs. Through extensive simulated experiments and real-world case studies, we show that the proposed ASRSM method outperforms the popular central composite design method and compares favorably with optimal designs. Copyright © 2012 John Wiley & Sons, Ltd.
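The sketch below illustrates the general idea of sequential adaptive RSM, not the authors' exact ASRSM algorithm: at each stage a local quadratic surface is fitted in the current region of interest, the region is recentered at the predicted optimum, and its width is halved. The black-box process and all settings are hypothetical.

```python
import itertools
import numpy as np

def true_process(x):                       # hypothetical black-box response
    return -((x[0] - 0.3)**2 + 2 * (x[1] + 0.5)**2) + np.random.normal(0, 0.01)

center, half_width = np.zeros(2), 1.0
for stage in range(4):
    # 3^2 face-centered grid in the current region (9 runs per stage).
    grid = np.array(list(itertools.product([-1, 0, 1], repeat=2)), float)
    y = np.array([true_process(center + half_width * g) for g in grid])
    # Fit a full quadratic model in coded units.
    g1, g2 = grid[:, 0], grid[:, 1]
    M = np.column_stack([np.ones(9), g1, g2, g1 * g2, g1**2, g2**2])
    b = np.linalg.lstsq(M, y, rcond=None)[0]
    # Stationary point of the fitted quadratic (coded), clipped to the region.
    H = np.array([[2 * b[4], b[3]], [b[3], 2 * b[5]]])
    xs = np.clip(np.linalg.solve(H, -b[1:3]), -1, 1)
    center = center + half_width * xs      # recenter the region of interest...
    half_width *= 0.5                      # ...and shrink it
    print(f"stage {stage}: center = {center.round(3)}")
```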

6.
In many industrial experiments there are restrictions on the resource (or cost) required for performing the runs in a response surface design. This will require practitioners to choose some subset of the candidate set of experimental runs. The appropriate selection of design points under resource constraints is an important aspect of multi-factor experimentation. A well-planned experiment should consist of factor-level combinations selected such that the resulting design will have desirable statistical properties, but the resource constraints should not be violated or the experimental cost should be minimized. The resulting designs are referred to as cost-efficient designs. We use a genetic algorithm for constructing cost-constrained G-efficient second-order response surface designs over cuboidal regions when an experimental cost at a certain factor level is high and a resource constraint exists. Consideration of practical resource (or cost) restrictions and different cost structures will provide valuable information for planning effective and economical experiments when optimizing statistical design properties. Copyright © 2005 John Wiley & Sons, Ltd.
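A bare-bones version of this kind of search is sketched below: a genetic algorithm selects n runs from a candidate grid to maximize log|X'X| (a D-efficiency proxy, used here as a stand-in for the paper's G-efficiency criterion) subject to a total-cost budget in which the high level of one factor is expensive. All sizes, costs, and GA settings are hypothetical.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
cand = np.array(list(itertools.product([-1, 0, 1], repeat=2)), float)  # 3^2 grid
cost = np.where(cand[:, 0] == 1, 3.0, 1.0)    # high level of x1 is expensive
n, budget, pop_size = 6, 12.0, 40

def model_matrix(pts):                        # full quadratic in 2 factors
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2, x1**2, x2**2])

def fitness(idx):
    if cost[idx].sum() > budget:
        return -np.inf                        # infeasible: over budget
    X = model_matrix(cand[idx])
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

pop = [rng.choice(len(cand), size=n, replace=False) for _ in range(pop_size)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    children = []
    for parent in pop[: pop_size // 2]:       # keep the better half...
        child = parent.copy()                 # ...and mutate one design point
        child[rng.integers(n)] = rng.integers(len(cand))
        ok = len(np.unique(child)) == n       # reject duplicate points
        children.append(np.unique(child) if ok else parent)
    pop = pop[: pop_size // 2] + children
best = max(pop, key=fitness)
print("selected runs:\n", cand[best], "\ntotal cost:", cost[best].sum())
```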

7.
This research presents a new method to generate near-orthogonal balanced mixed-level fractional designs. The proposed method shows that it is possible to create near-orthogonal balanced fractions of economic size. The method is based on an analysis of the behavior of the genetic algorithm used to generate the efficient arrays (EAs) developed by Guo; a pattern was detected, which led to an algorithm capable of constructing fractions in a simple way. These fractions are called near-orthogonal balanced arrays (NOBAs). To analyze the properties of the NOBAs and the capabilities of the proposed method, a series of performance indicators were defined. The NOBAs were compared with the EAs developed by Guo; results are provided.
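Performance indicators for such arrays can be made concrete. The sketch below computes two generic ones, column balance and pairwise orthogonality via the uniformity of level-combination counts, for a hypothetical 12-run mixed-level array; these are illustrative stand-ins, not the indicators defined in the paper.

```python
import itertools
import numpy as np

def balance_ok(col):
    _, counts = np.unique(col, return_counts=True)
    return counts.min() == counts.max()       # every level occurs equally often

def pair_nonuniformity(c1, c2):
    """0 means the pair is fully orthogonal (uniform combination counts)."""
    l1, l2 = np.unique(c1), np.unique(c2)
    table = np.zeros((len(l1), len(l2)))      # level-combination contingency
    for a, b in zip(c1, c2):
        table[np.searchsorted(l1, a), np.searchsorted(l2, b)] += 1
    expected = len(c1) / table.size
    return float(((table - expected) ** 2).sum())

# Hypothetical 12-run mixed-level array: one 3-level and two 2-level columns.
A = np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])
B = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
C = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1])
for name, col in [("A", A), ("B", B), ("C", C)]:
    print(name, "balanced:", balance_ok(col))
for (n1, c1), (n2, c2) in itertools.combinations([("A", A), ("B", B), ("C", C)], 2):
    print(n1, n2, "non-uniformity:", pair_nonuniformity(c1, c2))
```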

8.
This paper describes a p-hierarchical adaptive procedure based on minimizing the classical energy norm for the scaled boundary finite element method. The reference solution, which is the solution on the fine mesh formed by uniformly refining the current mesh element-wise to one order higher, is used to represent the unknown exact solution. The optimum mesh is assumed to be obtained when each element contributes equally to the global error. The refinement criteria and the energy norm-based error estimator are described and formulated for the scaled boundary finite element method. The effectivity index is derived and used to examine the quality of the proposed error estimator. An algorithm for implementing the proposed p-hierarchical adaptive procedure is developed. Numerical studies are performed on various bounded domain and unbounded domain problems. The results highlight several key points. Higher-order elements are shown to be highly efficient. The effectivity index indicates that the proposed error estimator based on the classical energy norm works effectively and that the reference solution employed is a high-quality approximation of the exact solution. The proposed p-hierarchical adaptive strategy works efficiently. Copyright © 2007 John Wiley & Sons, Ltd.

9.
Continuous improvement of the quality of industrial products is an essential factor in modern-day manufacturing. The investigation of the factors that affect process mean and process dispersion (standard deviation) is an important step in such improvements. Most often, experiments are executed for such investigations. To detect mean factors, the usual analysis of variance on the experimental data can be used. However, there is no unified method for identifying dispersion factors. In recent years, several methods have been proposed for identifying such factors with two levels. Multilevel factors, especially three-level factors, are common in industrial experiments, but methods for identifying dispersion effects in multilevel factors are lacking. In this paper, I develop a method for identifying dispersion effects from general fractional factorial experiments. The method consists of two stages. The first stage identifies mean factors using the performance characteristic as the response. The second stage computes a dispersion measure and identifies dispersion factors using the dispersion measure as the response. The sequence for identifying dispersion factors is first to test the significance of the total dispersion effect of a factor and then to test the dispersion contrasts of interest, a method similar to the typical post hoc testing procedure in ANOVA. This familiar approach should be appealing to practitioners. Copyright © 2001 John Wiley & Sons, Ltd.
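The two-stage logic can be sketched on simulated data: stage 1 fits the location model and takes residuals; stage 2 uses the log sample variance of those residuals at each factor level as the dispersion measure, so a large high-minus-low contrast flags a dispersion factor. The two-level process below is a simplified hypothetical; the paper's method covers general (including three-level) fractional factorials with formal tests.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
D = np.array(list(itertools.product([-1, 1], repeat=3)), float)   # 2^3 design
reps = 25
runs = np.repeat(D, reps, axis=0)
# Hypothetical process: A shifts the mean, C inflates the spread.
y = 3.0 + 1.5 * runs[:, 0] + rng.normal(0, np.exp(0.6 * runs[:, 2]))

# Stage 1: fit the location (mean) model and keep the residuals.
X = np.column_stack([np.ones(len(runs)), runs])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Stage 2: dispersion measure = log sample variance of the residuals at
# each factor level; a large high-minus-low contrast flags dispersion.
for j, name in enumerate("ABC"):
    lo = np.log(resid[runs[:, j] == -1].var(ddof=1))
    hi = np.log(resid[runs[:, j] == +1].var(ddof=1))
    print(f"{name}: dispersion contrast {hi - lo:+.2f}")
```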

10.
Fracture characterization of a cement-based material under mode I loading was performed using the single-edge-notched beam loaded in three-point bending. A new method based on beam theory and the equivalent crack concept is proposed to evaluate the resistance curve (R-curve), which is essential for determining fracture toughness with accuracy. The method accounts for the stress relief region in the vicinity of the crack, dispensing with crack length monitoring during the experiments. A numerical validation was performed by finite element analysis using a bilinear cohesive damage model. Experimental tests were then performed to validate the numerical procedure. The digital image correlation technique was used to measure specimen displacements accurately and without interference. Excellent agreement between the numerical and experimental load-displacement curves was obtained, which validates the procedure.
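The equivalent-crack idea can be sketched generically: each measured compliance is inverted through a compliance calibration C(a) to an equivalent crack length, and the energy release rate follows from the Irwin-Kies relation G = P²/(2b)·dC/da. The calibration and test data below are placeholders; the paper derives its own beam-theory expression including the stress-relief region.

```python
import numpy as np
from scipy.optimize import brentq

b = 0.05                                    # specimen thickness [m]

def compliance(a):                          # placeholder calibration C(a) [m/N]
    return 1e-6 * (1.0 + 500.0 * a**3)      # monotone in crack length a [m]

def dC_da(a, h=1e-6):                       # central-difference derivative
    return (compliance(a + h) - compliance(a - h)) / (2 * h)

# Hypothetical (load, displacement) pairs along the softening branch.
P     = np.array([900.0, 850.0, 780.0, 700.0])        # [N]
delta = np.array([1.1e-3, 1.3e-3, 1.6e-3, 2.0e-3])    # [m]

for Pi, di in zip(P, delta):
    Ci = di / Pi                                      # measured compliance
    a_eq = brentq(lambda a: compliance(a) - Ci, 1e-4, 0.2)  # equivalent crack
    G = Pi**2 / (2 * b) * dC_da(a_eq)                 # Irwin-Kies relation
    print(f"a_eq = {a_eq * 1e3:6.2f} mm   G = {G:7.1f} J/m^2")
```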

11.
Structural robust optimization problems are often solved via the so-called bi-level approach. This solution procedure often involves large computational effort, and its convergence properties are sometimes poor because of the non-smooth nature of the bi-level formulation. Another problem associated with the traditional bi-level approach is that the robustness of the obtained solutions cannot be fully assured, at least theoretically. In the present paper, confidence single-level nonlinear semidefinite programming (NLSDP) formulations for structural robust optimization problems under stiffness uncertainties are proposed. This is achieved using tools from convex analysis such as the S-procedure and quadratic embedding. The resulting NLSDP problems are solved using the modified augmented Lagrangian multiplier method, which has sound mathematical properties. Numerical examples show that confidence robust optimal solutions can be obtained effectively with the proposed approach. Copyright © 2010 John Wiley & Sons, Ltd.

12.
A model-based scheme is proposed for monitoring multiple gamma-distributed variables. The procedure is based on the deviance residual, which is a likelihood ratio statistic for detecting a mean shift when the shape parameter is assumed to be unchanged and the input and output variables are related in a certain manner. We discuss the distribution of this statistic and the proposed monitoring scheme. An example involving the advance rate of a drill is used to illustrate the implementation of the deviance residual monitoring scheme. Finally, a simulation study is performed to compare the average run length (ARL) performance of the proposed method to the standard Shewhart control chart for individuals. Copyright © 2003 John Wiley & Sons, Ltd.
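A minimal version of the deviance-residual statistic is sketched below for a gamma response with known shape κ and a fitted mean model: r = sign(y−μ)·√(2κ[(y−μ)/μ − ln(y/μ)]), plotted against approximate ±3 limits. The drill-like input-output model and the shift point are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
kappa = 5.0                                   # assumed known shape parameter
x = rng.uniform(1.0, 3.0, size=60)            # input variable (e.g. thrust)
mu = 2.0 * x                                  # assumed in-control mean model
y = rng.gamma(kappa, mu / kappa)              # in-control gamma responses
y[45:] = rng.gamma(kappa, 2.0 * mu[45:] / kappa)   # mean shift at obs 45

# Deviance residual for gamma data with known shape kappa.
r = np.sign(y - mu) * np.sqrt(2 * kappa * ((y - mu) / mu - np.log(y / mu)))
ucl, lcl = 3.0, -3.0                          # approximate N(0,1) limits
for i in np.flatnonzero((r > ucl) | (r < lcl)):
    print(f"signal at observation {i}: deviance residual {r[i]:+.2f}")
```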

13.
Processes with multiple correlated categorical quality characteristics are called multivariate categorical processes. These processes are usually represented by contingency tables and characterized by log-linear models. In this paper, two monitoring approaches, based on the likelihood ratio test (LRT) and the F test, are developed to monitor multivariate categorical processes via the contingency table in Phase I. In addition, a Phase I change point estimator for multivariate categorical processes is developed. The performance of the two proposed approaches is evaluated in terms of probability of signal, while the performance of the proposed change point estimator is evaluated in terms of accuracy and precision criteria through simulation experiments. We also compare the performance of the two proposed control charts with an existing chart, the χ²LRT control chart for multivariate categorical processes. Finally, a typical application of the proposed methods is illustrated with a real-world health care system.
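For a single Phase I sample, the LRT statistic against the pooled in-control table is the familiar G² statistic. The sketch below computes it for a hypothetical 2×2 table and compares it with a chi-square control limit; this is a generic one-sample check, not the paper's full Phase I charting and change-point machinery.

```python
import numpy as np
from scipy.stats import chi2

pooled = np.array([[40.0, 20.0], [25.0, 15.0]])   # pooled in-control counts
probs = pooled / pooled.sum()                      # estimated cell probabilities

def g2_statistic(observed):
    expected = probs * observed.sum()
    mask = observed > 0                            # treat 0*log(0) as 0
    return 2.0 * (observed[mask] * np.log(observed[mask] / expected[mask])).sum()

sample = np.array([[30.0, 30.0], [20.0, 20.0]])    # one sample's table
stat = g2_statistic(sample)
df = sample.size - 1                               # cells minus one
ucl = chi2.ppf(0.995, df)
print(f"G2 = {stat:.2f}, UCL = {ucl:.2f}, signal: {stat > ucl}")
```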

14.
We have developed the Correlation-based Adaptive Predictive Search (CAPS) as a fast search strategy for multidimensional template matching. A 2D template is analyzed, and certain characteristics are computed from its autocorrelation. The extracted information is then used to speed up the search procedure. This method provides a significant improvement in computation time while retaining the accuracy of traditional full-search matching. We have extended CAPS to three and higher dimensions. An example of a third dimension is rotation, where rotated targets can be located while again substantially reducing the computational requirements. CAPS can also be applied in multiple steps to further speed up the template matching process. Experiments were conducted to evaluate the performance of the 2D, 3D, and multiple-step CAPS algorithms. Compared with the conventional full-search method, we achieved speedup ratios of up to 66.5 and 145 with 2D and 3D CAPS, respectively. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 13, 169-178, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10055
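For reference, the conventional full-search baseline that CAPS accelerates is sketched below: normalized cross-correlation of a 2D template evaluated at every image position. The image and template are synthetic.

```python
import numpy as np

def full_search_ncc(image, template):
    """Return the best match position and its NCC score."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(0)
img = rng.random((64, 64))
tmpl = img[20:28, 33:41].copy()               # plant the target in the image
print(full_search_ncc(img, tmpl))             # -> ((20, 33), ~1.0)
```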

15.
16.
In this paper, four multiple-scale methods are proposed. The meshless hierarchical partition of unity is used as a multiple-scale basis. Multiple-scale analysis with the introduction of a dilation parameter to perform multiresolution analysis is discussed. A multiple-field formulation based on a 1-D gradient plasticity theory with a material length scale is also proposed to remove the mesh-dependency difficulty in softening/localization problems. A non-local (smoothing) particle integration procedure and its multiple-scale analysis are then developed. These techniques are described in the context of the reproducing kernel particle method. Results are presented for elastic-plastic one-dimensional problems and 2-D large-deformation strain localization problems to illustrate the effectiveness of these methods. Copyright © 2000 John Wiley & Sons, Ltd.

17.
Statistical process control is an important tool for monitoring and controlling a process. It is used to ensure that the manufacturing process operates in the in-control state. Multi-variety and small-batch production runs are common in manufacturing environments such as flexible manufacturing systems and just-in-time systems, which are characterized by a wide variety of mixed products, each produced in small volume. It is difficult to apply traditional control charts efficiently and effectively in such environments, and plotting a separate control chart for each individual part is not appropriate, since it cannot reflect the successive states of the manufacturing process. In this paper, a t-chart is proposed for implementation in multi-variety and small-batch production runs to monitor the process mean, and its statistical properties are evaluated. The run length distribution of the proposed t-chart is obtained by modelling the multi-variety process. The ARL performance for various shifts, numbers of product types, and subgroup sizes is also obtained. The results show that the t-chart can be successfully implemented to monitor a multi-variety production run. Finally, illustrative examples show that the proposed t-chart is effective in multi-variety and small-batch manufacturing environments. Copyright © 2013 John Wiley & Sons, Ltd.
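The core construction can be sketched simply: each subgroup is standardized with its own product type's target and its own sample standard deviation, so the resulting t statistics from different products share one chart with t-distribution limits. The product targets, schedule, and subgroup data below are hypothetical; the paper derives the chart's exact run-length properties.

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(7)
# Hypothetical targets per product type and a mixed production schedule.
mu0 = {"P1": 10.0, "P2": 25.0, "P3": 4.0}
schedule = ["P1", "P2", "P1", "P3", "P2", "P3", "P1", "P2"]
n = 5                                          # subgroup size
alpha = 0.0027                                 # Shewhart-like false-alarm rate
lim = t_dist.ppf(1 - alpha / 2, df=n - 1)      # shared t-chart control limit

for k, p in enumerate(schedule):
    x = rng.normal(mu0[p], 1.0, size=n)        # in-control subgroup
    t_stat = (x.mean() - mu0[p]) / (x.std(ddof=1) / np.sqrt(n))
    flag = "SIGNAL" if abs(t_stat) > lim else "ok"
    print(f"subgroup {k} ({p}): t = {t_stat:+.2f}  {flag}")
```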

18.
This paper presents a comprehensive finite-element modelling approach to electro-osmotic flows on unstructured meshes. The non-linear equation governing the electric potential is solved using an iterative algorithm. The employed algorithm is based on a preconditioned GMRES scheme. The linear Laplace equation governing the external electric potential is solved using a standard preconditioned conjugate gradient solver. The coupled fluid dynamics equations are solved using a fractional step-based, fully explicit, artificial compressibility scheme. This combination of an implicit approach to the electric potential equations and an explicit discretization of the Navier-Stokes equations is one of the best ways of solving the coupled equations in a memory-efficient manner. The local time-stepping approach used in the solution of the fluid flow equations accelerates the solution to a steady state faster than a global time-stepping approach. The fully explicit form and the fractional stages of the fluid dynamics equations make the system memory efficient and free of pressure instability. In addition to these advantages, the proposed method is suitable for use on both structured and unstructured meshes with a highly non-uniform distribution of element sizes. The accuracy of the proposed procedure is demonstrated by solving a basic micro-channel flow problem and comparing the results against an analytical solution. The comparisons show excellent agreement between the numerical and analytical data. In addition to the benchmark solution, we also present results for flow through a fully three-dimensional rectangular channel to further demonstrate the application of the presented method. Copyright © 2007 John Wiley & Sons, Ltd.
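The implicit treatment of the nonlinear potential equation can be illustrated on a 1D Poisson-Boltzmann-type model problem: each Newton linearization yields a linear system that is solved iteratively with preconditioned GMRES. This is a sketch of the idea only, not the paper's FEM formulation; the rtol keyword follows current SciPy (>= 1.12).

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import gmres, splu, LinearOperator

N, L = 200, 1.0                               # interior nodes, domain length
h = L / (N + 1)
# Model problem: -psi'' + sinh(psi) = 0, psi(0) = 1, psi(L) = 0.
lap = csc_matrix(diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N)) / h**2)
bc = np.zeros(N)
bc[0] = 1.0 / h**2                            # left Dirichlet value folded in
# Preconditioner: exact solve with the linear Laplacian part.
M = LinearOperator((N, N), matvec=splu(lap).solve)
psi = np.zeros(N)

for it in range(50):                          # Newton linearization of sinh
    c = np.cosh(psi)
    A = lap + diags(c)                        # Jacobian of the residual
    rhs = bc - np.sinh(psi) + c * psi
    psi_new, info = gmres(A, rhs, M=M, rtol=1e-10)   # inner iterative solve
    if np.max(np.abs(psi_new - psi)) < 1e-6:
        psi = psi_new
        break
    psi = psi_new
print(f"converged after {it + 1} Newton steps; psi at midpoint = {psi[N // 2]:.4f}")
```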

19.
Quantitative parameter mapping in MRI is typically performed as a two-step procedure in which serial imaging is followed by pixelwise model fitting. In contrast, model-based reconstructions directly reconstruct parameter maps from raw data without explicit image reconstruction. Here, we propose a method that determines T1 maps directly from multi-channel raw data as obtained by a single-shot inversion-recovery radial FLASH acquisition with a golden-angle view order. Joint reconstruction of the T1, spin-density, and flip-angle maps is formulated as a nonlinear inverse problem and solved by the iteratively regularized Gauss-Newton method. Coil sensitivity profiles are determined from the same data in a preparatory step of the reconstruction. Validations included numerical simulations, in vitro MRI studies of an experimental T1 phantom, and in vivo studies of the brain and abdomen of healthy subjects at a field strength of 3 T. The results obtained for the numerical and experimental phantoms demonstrate excellent accuracy and precision of model-based T1 mapping. In vivo studies allowed high-resolution T1 mapping of the human brain (0.5-0.75 mm in-plane, 4 mm section thickness) and liver (1.0 mm, 5 mm section) within 3.6-5 s. In conclusion, the proposed method for model-based T1 mapping may become an alternative to two-step techniques that rely on model fitting after serial image reconstruction. More extensive clinical trials now require accelerated computation and online implementation of the algorithm. © 2016 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 26, 254-263, 2016
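For contrast with the model-based approach, the conventional two-step baseline is sketched below: after serial image reconstruction, the classic inversion-recovery signal model S(t) = |M0(1 − 2e^(−t/T1))| is fitted pixelwise. The inversion times and the single-pixel signal are hypothetical, and the IR radial FLASH sequence in the paper has a more detailed signal model.

```python
import numpy as np
from scipy.optimize import least_squares

def ir_model(params, t):
    """Magnitude inversion-recovery signal model."""
    m0, t1 = params
    return np.abs(m0 * (1.0 - 2.0 * np.exp(-t / t1)))

def fit_t1(signal, t):
    res = least_squares(lambda p: ir_model(p, t) - signal,
                        x0=[signal.max(), 1.0],      # crude initial guess
                        bounds=([0.0, 0.01], [np.inf, 10.0]))
    return res.x[1]                                  # fitted T1 in seconds

# Hypothetical inversion times and a noisy single-pixel signal (T1 = 1.2 s).
t = np.array([0.05, 0.15, 0.3, 0.6, 1.0, 1.8, 3.0])
rng = np.random.default_rng(3)
signal = ir_model([100.0, 1.2], t) + rng.normal(0, 1.0, t.size)
print(f"fitted T1 = {fit_t1(signal, t):.3f} s")
```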

20.
Stress-strain curves of AAC at high temperatures: a first step toward performance-based design according to EN 1996-1-2
In this paper, the performance-based approach for the design of autoclaved aerated concrete (AAC) masonry walls subjected to fire is presented. The problems associated with the calculation methods in the current version of EN 1996-1-2 for the assessment of AAC loadbearing walls are explained. The current version of EN 1996-1-2 offers only tabulated data as a reliable method for structural fire assessment; the content of the current Annexes C and D is generally considered unreliable for design because adequate validation by experimental tests is lacking. For this reason, a proposal is made for improving the input parameters for mechanical models on the basis of experimental tests on AAC masonry. On this basis, new temperature-dependent stress-strain curves are proposed and compared with the stress-strain curves currently included in Annex D of EN 1996-1-2. The comparison shows that the current curves do not correspond to the actual behaviour of AAC masonry under fire conditions. The proposed curves can serve as a basis for implementation in the new version of EN 1996-1-2.
