Similar Literature
20 similar documents found
1.
Cluster sampling with an auxiliary variable is studied. A regression estimator of the population total is constructed, and it is shown that the regression estimator is usable and is no worse than either the simple estimator or the ratio estimator under cluster sampling.
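A minimal sketch of the estimator's form, assuming the usual linear-regression structure (the notation here is illustrative, not taken from the paper): with $\hat{Y}$ and $\hat{X}$ the simple cluster-sampling estimators of the study-variable and auxiliary totals, and $X$ the known auxiliary total,

$$\hat{Y}_{\mathrm{lr}} = \hat{Y} + \hat{\beta}\,(X - \hat{X}),$$

which reduces to the simple estimator when $\hat{\beta} = 0$ and to a ratio-type estimator when $\hat{\beta} = \hat{Y}/\hat{X}$, consistent with the claim that it is no worse than either.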

2.
In many experimental situations, a response surface design is divided into several blocks to control an extraneous source of variation. The traditional approach in most response surface applications is to treat the block effect as fixed in the assumed model. There are, however, situations in which it is more appropriate to consider the block effect as random. This article is concerned with inference about a response surface model in the presence of a random block effect. Since this model also contains fixed polynomial effects, it is considered to be a mixed-effects model. The main emphasis of the proposed analysis is on estimation and testing of the fixed effects. A two-stage mixed-model procedure is developed for this purpose. The variance components due to the random block effect and the experimental error are first estimated and then used to obtain the generalized least squares estimator of the fixed effects. This procedure produces the so-called Yates combined intra- and inter-block estimator. By contrast, the Yates intra-block estimator is the one obtained when the block effect is treated as fixed. In particular, if the response surface design blocks orthogonally, then the two estimators are shown to be identical. An experiment on bonding galvanized steel bars is used to motivate the problem and illustrate the results.
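A hedged sketch of the two-stage structure in matrix form (notation assumed for illustration): writing the model as $y = X\beta + Z\gamma + \varepsilon$ with fixed polynomial effects $\beta$, random block effects $\gamma \sim (0, \sigma_b^2 I)$, and error $\varepsilon \sim (0, \sigma^2 I)$, the first stage estimates the variance components and the second stage plugs them into generalized least squares:

$$\hat{V} = \hat{\sigma}_b^2 Z Z' + \hat{\sigma}^2 I, \qquad \hat{\beta}_{\mathrm{GLS}} = (X'\hat{V}^{-1}X)^{-1} X'\hat{V}^{-1} y,$$

the combined intra- and inter-block estimator; treating blocks as fixed instead yields the intra-block estimator, and the two coincide under orthogonal blocking.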

3.
The paper presents a model that extends the stochastic finite element method to the modelling of transitional energetic–statistical size effect in unnotched quasibrittle structures of positive geometry (i.e. failing at the start of macro-crack growth), and to the low-probability tail of the structural strength distribution, which is important for safe design. For small structures, the model captures the energetic (deterministic) part of the size effect and, for large structures, it converges to the Weibull statistical size effect required by the weakest-link model of extreme value statistics. Prediction of the tail of extremely low probability, such as one in a million, which needs to be known for safe design, is made feasible by the fact that the form of the cumulative distribution function (cdf) of a quasibrittle structure of any size has been established analytically in previous work. Thus, it is not necessary to turn to sophisticated methods such as importance sampling, and it suffices to calibrate only the mean and variance of this cdf. Two kinds of stratified sampling of strength in a finite element code are studied. One is Latin hypercube sampling of the strength of each element, considered as an independent random variable, and the other is the Latin square design, in which the strength of each element is sampled from one overall cdf of random material strength. The former is found to give a closer estimate of the variance, while the latter gives a cdf with smaller scatter and a better mean for the same number of simulations. For large structures, the number of simulations required to obtain the mean size effect is greatly reduced by adopting the previously proposed method of random property blocks. Each block is assumed to have a homogeneous random material strength, the mean and variance of which are scaled down according to the block size using the weakest-link model for a finite number of links. To check whether the theoretical cdf is followed at least up to the tail beginning at a failure probability of about 0.01, a hybrid of stratified sampling and Monte Carlo simulations in the lowest probability stratum is used. With the present method, the probability distribution of strength of quasibrittle structures of positive geometry can be easily estimated for any structure size. Copyright © 2007 John Wiley & Sons, Ltd.
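A minimal Python sketch of the first scheme, Latin hypercube sampling of each element's strength as an independent random variable; the Weibull strength cdf and its parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import weibull_min  # stand-in for the material strength cdf

def latin_hypercube_strengths(n_sim, n_elem, rng):
    """One draw per equiprobable stratum, stratum order permuted per element."""
    u = np.empty((n_sim, n_elem))
    for j in range(n_elem):                       # each element = one variable
        perm = rng.permutation(n_sim)             # random stratum order
        u[:, j] = (perm + rng.uniform(size=n_sim)) / n_sim
    return weibull_min.ppf(u, c=24.0, scale=1.0)  # hypothetical Weibull strength

rng = np.random.default_rng(0)
strengths = latin_hypercube_strengths(n_sim=100, n_elem=50, rng=rng)
```

In the Latin square variant described above, the strengths would instead be stratified against a single overall cdf of random material strength rather than column by column.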

4.
Memory-type auxiliary-information-based (AIB) control charts are very effective in detecting small-to-moderate shifts in the process mean. In this study, we first develop a unique uniformly minimum variance unbiased estimator of the process mean that requires information on the study variable as well as on several correlated auxiliary variables. Then, based on this estimator, adaptive and nonadaptive CUSUM and EWMA charts are developed with either fixed or variable sampling intervals for monitoring the process mean, namely, the multiple AIB (MAIB) charts. The proposed charts encompass existing charts with or without the auxiliary information. The run-length characteristics of the proposed charts are computed with Monte Carlo simulations when sampling from a multivariate normal distribution. Based on the run-length comparisons, it is found that the MAIB charts are uniformly and substantially more sensitive than the AIB charts when monitoring the process mean. Real data sets are also considered to explain the implementation of the MAIB charts.
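A sketch of how run-length characteristics can be computed by Monte Carlo, shown here for a plain two-sided EWMA chart on standardized statistics; the MAIB charting statistics are not reproduced, and the constants lam and L are illustrative.

```python
import numpy as np

def ewma_run_length(lam, L, shift, rng, max_n=100_000):
    """One simulated run length; observations are N(shift, 1) charting statistics."""
    h = L * np.sqrt(lam / (2.0 - lam))  # asymptotic control limit, unit variance
    z = 0.0
    for t in range(1, max_n + 1):
        z = lam * rng.normal(shift, 1.0) + (1.0 - lam) * z
        if abs(z) > h:
            return t
    return max_n

rng = np.random.default_rng(1)
arl = np.mean([ewma_run_length(0.1, 2.7, 0.5, rng) for _ in range(10_000)])
print(f"estimated ARL at a 0.5-sigma shift: {arl:.1f}")
```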

5.
In reliability analysis, the stress-strength model is often used to describe the life of a component which has a random strength (X) and is subjected to a random stress (Y). In this paper, we consider the problem of estimating the reliability R = P[Y < X] when stress and strength are independent and each follows an exponentiated Pareto distribution. The maximum likelihood estimator of the stress-strength reliability is calculated under simple random sampling, ranked set sampling, and median ranked set sampling. Four different reliability estimators under median ranked set sampling are derived: two are obtained when strength and stress both have an odd or both have an even set size, and the other two when the strength has an odd set size and the stress an even one, and vice versa. The performances of the suggested estimators are compared with their competitors under simple random sampling via a simulation study. The simulation study reveals that the stress-strength reliability estimates based on ranked set sampling and median ranked set sampling are more efficient than their competitors based on simple random sampling. In general, the stress-strength reliability estimates based on median ranked set sampling are smaller than the corresponding estimates under the ranked set sampling and simple random sampling methods.
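For intuition, a worked result under the common simplifying assumption that stress and strength share the same baseline component: if $F_X(x) = G(x)^{\alpha}$ and $F_Y(x) = G(x)^{\beta}$ with $G(x) = 1 - (1+x)^{-\lambda}$, then

$$R = P(Y < X) = \int_0^\infty F_Y(x)\,dF_X(x) = \alpha\int_0^1 u^{\alpha+\beta-1}\,du = \frac{\alpha}{\alpha+\beta},$$

so an estimate of $R$ follows from the estimates of the two shape parameters under each sampling design.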

6.
The estimation of the finite population distribution function is studied under several sampling strategies based on PPS cluster sampling, i.e., with cluster selection probabilities proportional to size. For the estimation of population means and totals, it is well known that strategies of this type give good results if the cluster selection probabilities are proportional to the cluster totals of the study variable or of a related auxiliary variable. It is proved that, for the estimation of the distribution function using cluster sampling, this solution is not good in general, and, under an appropriate criterion, the optimal cluster selection probabilities that minimize the variance of the estimator are obtained. This methodology is applied to two classical PPS sampling strategies: sampling with replacement, with the Hansen-Hurwitz estimator, and random groups sampling with the Rao-Hartley-Cochran estimator. Finally, a small simulation comparing the efficiency of this approach with other methods is presented.
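For reference, the Hansen-Hurwitz estimator used in the first strategy: with $n$ clusters drawn with replacement with selection probabilities $p_i$, and $y_i$ the cluster total of the variable being estimated (for the distribution function at a point $t$, the cluster total of the indicator $I(y \le t)$),

$$\hat{Y}_{\mathrm{HH}} = \frac{1}{n}\sum_{i=1}^{n}\frac{y_i}{p_i}, \qquad \operatorname{Var}\bigl(\hat{Y}_{\mathrm{HH}}\bigr) = \frac{1}{n}\sum_{i} p_i\Bigl(\frac{y_i}{p_i} - Y\Bigr)^{2},$$

and the optimal selection probabilities discussed above are the ones minimizing this variance for the indicator totals rather than for the raw totals.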

7.
    
TEST, 1991, 6(2): 67-85
The paper studies the problem of selecting an estimator with (approximately) minimal asymptotic variance. For every fixed contamination level there is usually just one such estimator in the considered family. Using the first and second derivatives of the asymptotic variance with respect to the parameter that indexes the family of estimators, the paper gives two examples of how to select the estimator, and gives an approximation to the loss suffered when the estimator with approximately minimal asymptotic variance is used instead of the estimator with the precisely minimal one.
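A hedged reconstruction of the approximation referred to: if $t^{*}$ minimizes the asymptotic variance $V(t)$ over the parameter $t$ indexing the family, then $V'(t^{*}) = 0$, and for an estimator chosen at $\hat{t}$ near $t^{*}$ the loss is approximately

$$L(\hat{t}) = V(\hat{t}) - V(t^{*}) \approx \tfrac{1}{2}\,V''(t^{*})\,(\hat{t} - t^{*})^{2},$$

which is why the first and second derivatives of the asymptotic variance are the quantities needed.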

8.
Two types of sampling plans are examined as alternatives to simple random sampling in Monte Carlo studies. These plans are shown to be improvements over simple random sampling with respect to variance for a class of estimators which includes the sample mean and the empirical distribution function.

9.
Mixed sampling plans are two-stage sampling plans in which variable and attribute quality characteristics are both used in deciding the acceptance or rejection of the lot. Owing to modern quality control systems, mixed sampling plans are widely applied at various stages of production. Using different sampling plans for different quality characteristics would entail losses in economy, time, and labor, so an attempt has been made to design multidimensional mixed sampling plans (MDMSP). Based on multidimensional quality characteristics, a MDMSP aims at controlling the overall quality of a lot or process. The design of a MDMSP is given in detail based on the Poisson model (type B process) in the second stage. Tables and illustrations are also provided. Suresh and Devaarul (2000) designed mixed sampling plans with chain sampling as the attribute plan. Suresh and Devaarul (2003) developed mixed sampling plans for maximum allowable variance, and in the same year combined process control and product control to reduce sampling costs. Schilling (1967) gave a method for determining the operating characteristic of a mixed sampling plan, along with several other measures of the plan. A multidimensional mixed sampling scheme consists of two stages in which several variable and attribute quality characteristics are considered in deciding the acceptance or rejection of the lot. The main advantage of a MDMSP over any other plan is the reduction in sample size for the same amount of protection.
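A sketch of the acceptance probability of an independent two-stage mixed plan with a Poisson (type B) attribute stage, in the spirit of Schilling (1967); the symbols $P_1(p)$, $n_2$, and $c$ are generic stand-ins, not the paper's notation:

$$P_a(p) = P_1(p) + \bigl[1 - P_1(p)\bigr]\sum_{x=0}^{c}\frac{e^{-n_2 p}\,(n_2 p)^{x}}{x!},$$

where $P_1(p)$ is the probability of acceptance at the variables stage, $n_2$ the attribute sample size, and $c$ the acceptance number; a MDMSP would apply such a structure across several quality characteristics at once.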

10.
CUmulative SUM (CUSUM) charts are sensitive to the small and moderate shifts that occur in the process parameter(s). In this article, we propose CUSUM and combined Shewhart-CUSUM charts for monitoring the process mean using the best linear unbiased estimator of the location parameter based on an ordered double-ranked set sampling (RSS) scheme, where the CUSUM chart refers to Crosier's CUSUM chart. The run-length characteristics of the proposed CUSUM charts are computed with Monte Carlo simulations. The run-length profiles of the proposed CUSUM charts are compared with those of the CUSUM charts based on simple random sampling, RSS, and ordered RSS schemes. It is found that the proposed CUSUM charts uniformly outperform their existing counterparts in detecting all kinds of shifts in the process mean. A real data set is also considered to explain the implementation of the proposed CUSUM charts.
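A minimal Python sketch of Crosier's two-sided CUSUM recursion on standardized charting statistics; the reference value k and decision interval h below are illustrative, and the ordered double-RSS estimator itself is not reproduced.

```python
import numpy as np

def crosier_run_length(k, h, shift, rng, max_n=100_000):
    """One run length of Crosier's CUSUM: shrink the cumulative sum toward
    zero by the factor (1 - k/C) whenever C = |S + x| exceeds k; signal at |S| > h."""
    s = 0.0
    for t in range(1, max_n + 1):
        x = rng.normal(shift, 1.0)                      # standardized statistic
        c = abs(s + x)
        s = 0.0 if c <= k else (s + x) * (1.0 - k / c)
        if abs(s) > h:
            return t
    return max_n

rng = np.random.default_rng(2)
print(np.mean([crosier_run_length(0.5, 4.0, 0.0, rng) for _ in range(5_000)]))
```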

11.
Minimum-variance sampling in structural system reliability analysis
This paper studies importance sampling, under the criterion of minimum variance of the failure-probability estimate, in the reliability analysis of parallel structural systems. Since it is difficult to optimize the sampling mean and standard deviation directly, three theorems are first established; based on these, and through numerical examples, recommended ranges are given for the mean and standard deviation of a normal sampling density. Analysis and practical simulation show that, provided sampling is from a normal distribution, a sound choice of the sampling mean and standard deviation can further improve sampling efficiency.
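A minimal Python sketch of the idea for a scalar limit state, with a normal importance-sampling density whose mean and standard deviation play the role of the tunable parameters studied above; the limit-state function g and all constants are illustrative.

```python
import numpy as np
from scipy.stats import norm

def is_failure_prob(g, mu_h, sig_h, n, rng):
    """Estimate pf = P[g(X) <= 0] for X ~ N(0, 1) by sampling from N(mu_h, sig_h)."""
    x = rng.normal(mu_h, sig_h, size=n)
    w = norm.pdf(x) / norm.pdf(x, loc=mu_h, scale=sig_h)  # likelihood ratio
    val = (g(x) <= 0.0) * w
    return val.mean(), val.std(ddof=1) / np.sqrt(n)       # estimate, std. error

rng = np.random.default_rng(3)
g = lambda x: 3.0 - x             # failure when x >= 3; true pf = 1 - Phi(3)
pf, se = is_failure_prob(g, mu_h=3.0, sig_h=1.0, n=10_000, rng=rng)
print(f"pf ~ {pf:.2e} +/- {se:.1e}")
```

Centering the sampling density near the most likely failure point (here mu_h = 3) is what drives the variance reduction; the recommended parameter ranges concern exactly this choice.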

12.
A number of samples of fixed size are drawn from a finite population, and all defectives found are replaced by nondefectives. Two possible sampling schemes are considered: (1) all samples (with defectives replaced) are returned to the population prior to drawing the next sample and (2) sampling is performed only on those units not yet examined. In each scheme there is a (possibly) nonzero probability of misclassifying a defective as a nondefective. Exact solutions are obtained for the probability of detecting and replacing a given number of defectives, and the mean and variance are derived for use in the normal approximation to these probabilities.
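One step of the reasoning, in the abstract's own terms: if a drawn sample contains $d$ defectives and each is independently detected (and hence replaced) with probability $1-\theta$, where $\theta$ is the misclassification probability, then the number replaced satisfies

$$R \sim \mathrm{Binomial}(d,\,1-\theta), \qquad P(R = r) = \binom{d}{r}(1-\theta)^{r}\,\theta^{\,d-r},$$

and the exact probabilities follow by mixing over the distribution of $d$ in the sample (hypergeometric within a draw).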

13.
For the optimization-under-uncertainty problem, there has been recent interest in coupling trust-region methods with surrogate surfaces or function approximations. There are many theoretical and statistical issues that must be carefully considered in following such an approach. Herein, the Nadaraya-Watson estimator is used for the smooth function approximation, and the effects of observation noise and random sampling on the estimator error are examined. For the fundamental optimization problem in which the exact function is quadratic, analytical results are derived for the mean-square error of the estimated difference and gradient of the function. It is also shown how these statistics are related to the trust-region method, how the analytical results can be used to determine the bandwidth of the estimator's kernel, and how third-order terms can affect the error statistics.
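For reference, the Nadaraya-Watson estimator with kernel $K$ and bandwidth $h$:

$$\hat{m}(x) = \frac{\sum_{i=1}^{n} K\bigl((x - x_i)/h\bigr)\,y_i}{\sum_{i=1}^{n} K\bigl((x - x_i)/h\bigr)},$$

whose bias-variance trade-off in $h$ is what the mean-square-error analysis quantifies in the quadratic case, leading to the bandwidth selection rule mentioned above.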

14.
Host cardinality estimation is an important research field in network management and network security. Estimating host cardinality with an array of linear estimators is a common approach. Existing algorithms do not take the memory footprint into account when selecting the number of estimators used by each host. This paper analyzes the relationship between memory occupancy and estimation accuracy and compares the effects of different parameters on algorithm accuracy. Cardinality estimation is a randomized algorithm, so there is a deviation between the estimated results and the actual cardinalities. The deviation is affected by systematic factors, such as the random parameters inherent in a linear estimator and the random functions used to map a host to different linear estimators. These random factors cannot be reduced by merging multiple estimators, and existing algorithms cannot remove the deviation they cause. In this paper, we regard the estimation deviation as a random variable and propose a sampling method, denoted the linear estimator array step sampling algorithm (L2S), to reduce the influence of this random deviation. L2S improves the accuracy of the estimated cardinalities by evaluating and removing the expected value of the random deviation. The cardinality estimation algorithm based on the estimator array is computationally intensive and takes a long time when processing high-speed network data in a serial environment. To solve this problem, a method is proposed to port the algorithm to the Graphics Processing Unit (GPU). Experiments on real-world high-speed network traffic show that L2S can reduce the absolute bias by more than 22% on average, with extra processing time of less than 61 milliseconds on average.
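A minimal Python sketch of a single linear (probabilistic counting) estimator, the building block of the estimator arrays discussed above; the hash is a stand-in and L2S itself is not reproduced.

```python
import numpy as np

def linear_count(items, m=1024):
    """Linear counting: hash items into an m-bit map and estimate the
    cardinality from the number z of bits still zero: n_hat = m * ln(m / z)."""
    bitmap = np.zeros(m, dtype=bool)
    for it in items:
        bitmap[hash(it) % m] = True   # stand-in for a proper hash function
    z = m - int(np.count_nonzero(bitmap))
    return m * np.log(m / z)          # breaks down if z == 0 (map saturated)

print(linear_count(f"peer-{i}" for i in range(500)))  # roughly 500
```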

15.
We consider estimation of the precision of a measuring instrument without the benefit of replicate observations on heterogeneous sampling units. Grubbs (1948) proposed an estimator that involves the use of a second measuring instrument, resulting in a pair of observations on each sampling unit. Since the precisions of the two measuring instruments are generally different, these observations cannot be treated as replicates. Very large sample sizes are often required if the standard error of the estimate is to be within reasonable bounds and if negative precision estimates are to be avoided. We show that the two-instrument Grubbs estimator can be improved considerably if fairly reliable preliminary information regarding the ratio of sampling-unit variance to instrument variance is available. Our results are presented in the context of the evaluation of on-line analyzers. A data set from an analyzer evaluation is used to illustrate the methodology.
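For reference, the two-instrument Grubbs estimators: writing $x_i = \mu_i + e_i$ and $y_i = \mu_i + f_i$ for the paired measurements on sampling unit $i$, with mutually independent errors,

$$\hat{\sigma}_{\mu}^{2} = s_{xy}, \qquad \hat{\sigma}_{e}^{2} = s_{x}^{2} - s_{xy}, \qquad \hat{\sigma}_{f}^{2} = s_{y}^{2} - s_{xy},$$

where $s_{x}^{2}$, $s_{y}^{2}$, and $s_{xy}$ are the sample variances and covariance; differencing nearly equal quantities is why large samples are needed and why negative precision estimates can occur, and it is this that prior information on the variance ratio mitigates.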

16.
It is assumed that there are available k finite populations, each consisting of U primary units, and that each primary unit can be subdivided into T elements. It is further assumed that the populations have a common known variance among primary units, and a common known variance among elements within primary units. The values of the overall population means per element are assumed to be unknown, as is the true pairing of the ranked values of these means with the populations.

It is desired to select the population which has the largest overall population mean per element. This selection is to be accomplished by taking a random sample of u ≤ U primary units from each population, and then a random sample of t ≤ T elements from each sampled primary unit. The pair (t, u) is to be chosen in such a way as to guarantee that the probability of a correct selection will be equal to or greater than a specified quantity whenever the true difference between the largest and second largest overall population mean per element is equal to or greater than a second specified quantity.

In general, many pairs (t, u) will accomplish the stated objective. It is proposed that a choice be made among these pairs using the criterion of minimum total cost of sampling; this formulation leads to an integer programming problem with a nonlinear constraint (see the sketch after this item). An especially simple method of solving this problem is proposed, and this method is contrasted with another method which has been considered in the literature.

It is shown how the subsampling ranking procedure described in this paper can be applied to a bulk sampling problem involving the clean content of wool in bales.
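A hedged sketch of the structure behind the choice of (t, u), with generic per-primary-unit and per-element cost coefficients $c_1$ and $c_2$ that are not taken from the paper: under the stated common variance components, the variance of a population's sample mean per element is

$$\operatorname{Var}(\bar{\bar{y}}) = \frac{\sigma_u^2}{u} + \frac{\sigma_e^2}{u\,t},$$

so the probability-of-correct-selection requirement becomes a nonlinear constraint on $(t, u)$, and the proposal is to minimize a total sampling cost of roughly $k\,(c_1 u + c_2 u t)$ over integer pairs satisfying it.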

17.
In this article, a modified continuous sampling plan of type II is provided for finite production runs. The suggested plan revises the continuous sampling plan-2 (CSP-2) of Yang (1983): it places no predetermined limit on the number of items to be inspected until the second defect is detected while in partial inspection mode. A derivation similar to Yang's is used to find an approximation to the average outgoing quality of the modified CSP-2 in finite production runs. Some tables are provided to aid in the selection of the clearance number and sampling fraction when the production run length and an average outgoing quality limit are given. Copyright © 2002 John Wiley & Sons, Ltd.

18.
Variance is one of the most important measures of dispersion in practical work. A commonly used approach to variance estimation is the traditional method of moments, which is strongly influenced by extreme values, so its results cannot be relied on in their presence. Building on Koyuncu's recent work, the present paper first proposes two classes of variance estimators based on linear moments (L-moments), and then employs them with auxiliary data under double stratified sampling to introduce a new class of calibration variance estimators using important properties of L-moments (L-location, L-cv, L-variance). Three populations are used to assess the efficiency of the new estimators: the first and second involve artificial data, and the third involves real data. The percentage relative efficiency of the proposed estimators over existing ones is evaluated. In the presence of extreme values, the findings show the superiority and high efficiency of the proposed classes over the traditional ones. Hence, when auxiliary data are available along with extreme values, the proposed classes of estimators may be implemented in a wide variety of sampling surveys.
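For reference, the basic sample L-moment quantities such classes build on, computed from the order statistics $x_{(1)} \le \cdots \le x_{(n)}$:

$$b_0 = \frac{1}{n}\sum_{i=1}^{n} x_{(i)}, \qquad b_1 = \frac{1}{n}\sum_{i=2}^{n}\frac{i-1}{n-1}\,x_{(i)}, \qquad l_1 = b_0, \quad l_2 = 2b_1 - b_0, \quad \tau_2 = \frac{l_2}{l_1},$$

with $l_1$ the L-location, $l_2$ the L-scale (the L-variance analogue), and $\tau_2$ the L-cv; being linear in the order statistics, they are far less sensitive to extreme values than ordinary moments, which is the property the proposed calibration estimators exploit.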

19.
20.
The two-sample variance, the frequency stability measure in the time domain, is defined in terms of an infinite time average, so the estimate of the variance obtained from a finite data set must be accompanied by a confidence interval. Theoretical equations are derived for the variance, or degrees of freedom, of the chi-square distribution under the continuous sampling method, which makes more efficient use of a finite set of sampled time data. The results are plotted as degrees of freedom and show considerable improvement for phase modulation (PM) noise compared with the results for τ-overlap sampling, because an increased number of τ-averaged frequency samples can be obtained from the time data. For white, flicker, and random-walk frequency modulation (FM) noise, the improvements converge to about 100%, 30%, and 4%, respectively. The reasonableness of the assumption of stationarity of the random process is discussed.
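For reference, the two-sample (Allan) variance and its finite-data estimate from $M$ adjacent $\tau$-averaged fractional-frequency samples $\bar{y}_k$:

$$\sigma_y^2(\tau) = \tfrac{1}{2}\bigl\langle(\bar{y}_{k+1} - \bar{y}_k)^2\bigr\rangle, \qquad \hat{\sigma}_y^2(\tau) = \frac{1}{2(M-1)}\sum_{k=1}^{M-1}(\bar{y}_{k+1} - \bar{y}_k)^2;$$

continuous sampling increases the number of $\bar{y}_k$ that can be formed from a fixed record, which raises the equivalent degrees of freedom of the chi-square distribution of the estimate and so tightens the confidence interval.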
