Similar Literature
1.
We consider the problem of providing a fixed width confidence interval for the difference of two normal means when the variances are unknown and unequal. We propose a two-stage procedure that differs from those of Chapman (1950) and Ghosh (1975). The procedure provides the desired confidence, subject to the restriction on the width, for certain values of the design parameter h. Values of h are given by the Monte Carlo method for various combinations of first stage sample size and confidence level. Finally, it is shown that the procedure is asymptotically more efficient than those of Chapman and Ghosh with respect to total sample size, as the width of the interval approaches zero.
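The two-stage logic can be sketched as follows: a pilot stage supplies variance estimates, and the final per-group sample sizes are driven by the design constant h. The Stein-type form of the rule and the role of h as a direct multiplier are illustrative assumptions; the paper tabulates h by Monte Carlo.

```python
import math
import statistics

def two_stage_total_sizes(pilot1, pilot2, half_width, h):
    """Second-stage total sample sizes per group from pilot data.

    A generic Stein-type rule: n_i = max(n0, ceil(h^2 * s_i^2 / d^2)),
    where d is the desired half-width and h the design constant
    (an assumed input here).  Equal pilot sizes are assumed.
    """
    n0 = len(pilot1)
    s1_sq = statistics.variance(pilot1)  # sample variance, group 1
    s2_sq = statistics.variance(pilot2)  # sample variance, group 2
    n1 = max(n0, math.ceil(h * h * s1_sq / half_width ** 2))
    n2 = max(n0, math.ceil(h * h * s2_sq / half_width ** 2))
    return n1, n2
```

Unequal variances naturally yield unequal total sample sizes, which is the point of handling the Behrens–Fisher setting with a two-stage design.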

2.
Sequential fixed width interval estimation procedure is considered in a simple linear regression model where the independent variable (covariate) assumes only a finite number of values and the dependent variable (response) is randomly right censored. The censoring distribution may depend on the covariate values. The sequential procedure is shown to be consistent and efficient as the width of the confidence interval decreases to zero. The asymptotic distribution of the underlying stopping rule is obtained.

3.
Abstract

The problem of asymptotic efficiency of adaptive one-step predictors for a stable multivariate first-order autoregressive process (AR(1)) with unknown parameters is considered. The predictors are based on the truncated estimators of the dynamic matrix parameter. The truncated estimation method is a modification of the truncated sequential estimation method that makes it possible to obtain estimators of ratio-type functionals with a given accuracy by samples of fixed size. The criterion of optimality is based on the loss function, defined as a sum of sample size and the sample mean of the squared prediction error. The cases of known and unknown noise variance are studied. In the latter case the optimal sample size is a special stopping time. Simulation results are given.
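A scalar least-squares one-step predictor conveys the basic prediction step; the multivariate dynamic matrix and the truncation of the estimator described in the abstract are omitted in this sketch.

```python
def ar1_predict(xs):
    """One-step predictor for a scalar AR(1) process.

    Estimate the dynamic parameter a by least squares from x_1..x_n
    and predict x_{n+1} = a_hat * x_n.  A scalar stand-in for the
    multivariate case; the truncation step is not implemented.
    """
    num = sum(xs[t] * xs[t - 1] for t in range(1, len(xs)))  # sum x_t x_{t-1}
    den = sum(x * x for x in xs[:-1])                        # sum x_{t-1}^2
    a_hat = num / den
    return a_hat * xs[-1]
```

On a noiseless geometric series the estimator recovers the parameter exactly, which makes the one-step forecast exact as well.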

4.
Abstract

In this article we consider the problem of comparing several Bernoulli populations in order to identify the population producing the largest success probability that is also larger than a given standard. We propose a fixed sample size procedure for which we develop the probability of a correct selection and the least favorable configuration. We also propose a curtailed version of the fixed sample size procedure and show that the curtailed procedure reaches the same probability of a correct selection as the fixed sample size procedure, while using fewer observations. We provide tables to implement the procedures and illustrate them via an example. We use simulations to estimate the savings by the curtailment procedure.

5.
Abstract

We consider the standardized difference of normal means as the outcome measure for comparing two independent groups, say experimental E and control C. This scale- and translation-invariant effect measure enables a convenient specification of a noninferiority margin in applications. Starting with a family of null hypotheses, we derive adaptive group sequential confidence intervals keeping the predefined confidence coefficient. The interval at the end of the trial determines whether and which null hypotheses can be rejected. During the course of the trial, the sample size can be calculated in a completely adaptive way based on the unblinded data of previously performed stages. Concrete rules for sample size updating are provided. Moreover, in each interim analysis, it is possible to change the planning from showing noninferiority to showing superiority or vice versa without affecting the overall type I error. A real data example is worked out in detail and the change in the sample size planning from showing noninferiority to showing superiority is considered during the ongoing trial. In Monte Carlo simulations, we investigate the practical properties of the proposed intervals.

6.
Abstract

We consider two-stage estimation for a fixed-span confidence region about a linear function of mean vectors from π_i: N_p(μ_i, Σ_i), i = 1,…,k (≥2), when the Σ_i have some structures. The purpose of this article is to investigate asymptotic efficiency of the estimation up to the second order in terms of the sample size. An adjustment of the design constant and a proper choice of the initial sample size appearing in the two-stage estimation are proposed to have asymptotic second-order efficiency. Some simulations are carried out to see moderate sample size performances of the proposed two-stage estimation. An example is given for a demonstration.

7.
Techniques of Armitage (1958) for finding confidence intervals after sequential tests (SCI) are applied to curtailed binomial test boundaries. The form of the exact randomized SCI is given. We also show that the conservative confidence interval calculated as though a fixed sample size procedure had been used (FCI) remains conservative when used after stopping on a curtailed boundary. Numerical results are obtained to assess the potential gains that may be obtained by using the conservative SCI or exact SCI over the conservative FCI for these boundaries.
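The curtailed stopping boundary itself is simple to illustrate: a one-sided fixed-sample binomial rule stops as soon as the final decision is forced. The acceptance threshold c is an assumed input; the interval construction after stopping is the paper's subject and is not sketched here.

```python
def curtailed_count(outcomes, n, c):
    """Curtailed version of the fixed-sample rule
    "accept if at least c successes in n Bernoulli trials".

    Stops as soon as the decision is determined: successes >= c
    (accept) or failures > n - c (reject).  Returns
    (decision, observations_used); the decision always matches the
    full fixed-sample rule, using at most n observations.
    """
    successes = failures = 0
    for used, x in enumerate(outcomes[:n], start=1):
        successes += x
        failures += 1 - x
        if successes >= c:
            return True, used
        if failures > n - c:
            return False, used
    raise ValueError("need at least n outcomes")
```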

8.
Drying Technology, 2013, 31(7): 1451–1464
Abstract

Drying kinetics of soybean seeds were investigated in the fixed bed (which is normally used) and in the moving bed with cross flow, both being run under thin-layer conditions. The analysis of the available data followed the diffusive model approach with re-parameterization. Special attention has been given to the nonlinearity inherent in the database in order to evaluate the statistical properties of the least squares estimator. The results showed that the effective diffusivity of the moving bed is 24 to 44% higher than that of the fixed bed.

9.
Abstract

Traditional control charts for process monitoring are based on taking samples of fixed size at equally spaced sampling points. As an alternative to these traditional fixed sampling rate (FSR) control charts, variable sampling rate (VSR) control charts change the sampling rate as a function of the data from the process. With VSR control charts the sampling rate is increased whenever there is some indication of a problem with the process, and decreased when there is no indication of a problem. This paper investigates a type of VSR control chart based on sequential sampling in which the sample size used at a sampling point is a function of the data from the current and past sampling points. Sequential sampling is investigated in the context of simultaneously monitoring both the mean and variability of a multivariate normal process. Multivariate exponentially weighted moving average (MEWMA) control charts based on sample means and on the sum of the squared deviations from the target are considered. When sequential sampling is used in a combination of the MEWMA charts based on sample means and on squared deviations from the target, the average performance is much better than that of the standard FSR multivariate control charts that have traditionally been used.
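A one-dimensional EWMA chart conveys the signaling mechanism underlying these charts; the multivariate MEWMA statistic and the sequential-sampling rule are beyond this sketch, and the smoothing constant λ and limit multiplier L below are assumed values.

```python
import math

def ewma_chart(data, lam, L, sigma=1.0, mu0=0.0):
    """Univariate EWMA chart: z_n = (1 - lam) * z_{n-1} + lam * x_n,
    signaling when |z_n - mu0| exceeds L * sigma_z with the asymptotic
    control limit sigma_z = sigma * sqrt(lam / (2 - lam)).

    Returns the index of the first signal, or None if no signal.
    """
    z = mu0
    limit = L * sigma * math.sqrt(lam / (2 - lam))
    for i, x in enumerate(data):
        z = (1 - lam) * z + lam * x
        if abs(z - mu0) > limit:
            return i
    return None
```

Small λ gives the chart a long memory (good for small sustained shifts); a VSR scheme would additionally tighten the sampling interval whenever z drifts toward the limit.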

10.
Multi-state models are frequently applied to describe transitions over time between three states: healthy, not healthy and death. The three-state model can be used to estimate life expectancies in health and ill health. In this article, continuous-time Markov models are specified for the transitions between the three states. Transition intensities are regressed on age as a time-dependent covariate. The covariate is handled in a piecewise-constant fashion where the time interval between two consecutive observations is divided into subintervals of fixed and equal lengths. Study design choices such as sample size, length of follow-up, and time intervals between observations are investigated in a simulation study. The effects on parameter estimation are discussed as well as the effects on the estimation of life expectancies. In addition, data taken from the UK Cognitive Functioning and Ageing Study are analysed.
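The piecewise-constant handling of age can be sketched by propagating state-occupancy probabilities with the intensities frozen on each subinterval. The state labels, the simple Euler update, and the intensity function's form are illustrative assumptions; the article works with the exact matrix exponential of the intensity matrix.

```python
def transition_probs(p0, intensities, age0, dt, steps):
    """Propagate occupancy probabilities for healthy(0)/ill(1)/dead(2).

    Age is treated as piecewise constant on each subinterval of length
    dt.  `intensities(age)` returns the off-diagonal rates
    (q01, q10, q02, q12); death (state 2) is absorbing.
    """
    p = list(p0)
    for k in range(steps):
        q01, q10, q02, q12 = intensities(age0 + k * dt)
        dp0 = (-(q01 + q02) * p[0] + q10 * p[1]) * dt   # flow out of healthy
        dp1 = (q01 * p[0] - (q10 + q12) * p[1]) * dt    # flow through ill
        dp2 = (q02 * p[0] + q12 * p[1]) * dt            # flow into death
        p = [p[0] + dp0, p[1] + dp1, p[2] + dp2]
    return p
```

The increments sum to zero by construction, so total probability is conserved at every step, and integrating the time spent in states 0 and 1 would give the healthy and unhealthy life expectancies the abstract mentions.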

11.
In process identification (i.e., dynamic model development) information on the precision and reliability of a parameter estimate is conveyed by a confidence interval. The best confidence interval is the one with the shortest width for a given level of confidence. Confidence intervals widen as the standard error increases or as the number of estimated parameters increases. When the value of a parameter is needed for physical understanding of process characteristics, its precision and reliability, i.e., certainty, is crucial. Parameter certainty increases as the number of estimated parameters decreases because this causes confidence intervals to shorten and confidence levels to increase. Hence, this article focuses on maximizing parameter certainty of physically interpretable dynamic parameters under block-oriented modeling by obtaining accurate values for all the dynamic parameters from a minimum set of estimated parameters. This objective is accomplished by the development of a procedure that identifies equivalent sets of parameters and estimates one parameter for each set. For a simulated CSTR with seven inputs and five outputs, the 84 physically based dynamic parameters were accurately determined from 23 estimated parameters, which increased the confidence level from 50% to 99.9% for a fixed interval width.

12.
13.
Abstract

In this article, we consider the nonasymptotic sequential estimation of means of random variables bounded in between zero and one. We have rigorously demonstrated that, in order to guarantee a prescribed relative precision and confidence level, it suffices to continue sampling until the sample sum is no less than a certain bound and then take the sample mean as an estimate for the mean of the bounded random variable. We have developed an explicit formula and a bisection search method for the determination of such bound of sample sum, without any knowledge of the bounded variable. Moreover, we have derived bounds for the distribution of sample size. In the special case of Bernoulli random variables, we have established analytical and numerical methods to further reduce the bound of sample sum and thus improve the efficiency of sampling.
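The stopping rule itself is short: draw observations in [0, 1] until the running sum reaches the bound, then return the sample mean. The bound that guarantees a given relative precision and confidence comes from the paper's explicit formula or bisection search; here it is just an assumed input.

```python
def estimate_mean(sample, bound):
    """Sequential estimator for the mean of a [0, 1]-valued variable.

    `sample()` draws one observation; sampling continues until the
    sample sum reaches `bound` (an assumed input in this sketch).
    Returns (sample_mean, sample_size).
    """
    total, n = 0.0, 0
    while total < bound:
        total += sample()
        n += 1
    return total / n, n
```

Because the stopping condition depends only on the sum, the sample size is random: variables with small means need more draws to accumulate the same sum, which is exactly what a relative-precision guarantee requires.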

14.
A neural network is trained to estimate the unknown crystallinity and temperature of Nylon 6 and its nanocomposites while the material is undergoing cooling at a fixed rate. The innovation of the work is that the full spectrum captured by the laser Raman spectroscope is used to train a neural network for estimation of crystallinity and temperature. The small‐angle light scattering (SALS) and differential scanning calorimetry (DSC) data were used to provide the training examples for the neural network. Results indicate that the neural network can provide reliable estimates of the crystallinity and temperature provided that there is a sufficient number of training data available. Neural network methodology is also efficient in establishing the crystallization–temperature relationship as a function of cooling rate and demonstrates the heterogeneous nucleation effect of nanoclay in the nylon 6 matrix. © 2004 Wiley Periodicals, Inc. J Appl Polym Sci 92: 474–483, 2004

15.
This paper considers the problem of sequential point estimation and fixed accuracy confidence set procedures of autoregressive parameters in a p-th order stationary autoregressive model. The sequential estimator proposed here is based on the least squares estimator and is shown to be risk efficient as the cost of estimation error tends to infinity. Furthermore, the proposed procedure for fixed-width confidence set is shown to be both asymptotically consistent and asymptotically efficient as the width approaches zero.

16.
Abstract

In this article, we consider a variety of inference problems for high-dimensional data. The purpose of this article is to suggest directions for future research and possible solutions about p ≫ n problems by using new types of two-stage estimation methodologies. This is the first attempt to apply sequential analysis to high-dimensional statistical inference ensuring prespecified accuracy. We offer the sample size determination for inference problems by creating new types of multivariate two-stage procedures. To develop theory and methodologies, the most important and basic idea is the asymptotic normality when p → ∞. By developing asymptotic normality when p → ∞, we first give (a) a given-bandwidth confidence region for the square loss. In addition, we give (b) a two-sample test to assure prespecified size and power simultaneously together with (c) an equality-test procedure for two covariance matrices. We also give (d) a two-stage discriminant procedure that controls misclassification rates being no more than a prespecified value. Moreover, we propose (e) a two-stage variable selection procedure that provides screening of variables in the first stage and selects a significant set of associated variables from among a set of candidate variables in the second stage. Following the variable selection procedure, we consider (f) variable selection for high-dimensional regression to compare favorably with the lasso in terms of the assurance of accuracy and the computational cost. Further, we consider variable selection for classification and propose (g) a two-stage discriminant procedure after screening some variables. Finally, we consider (h) pathway analysis for high-dimensional data by constructing a multiple test of correlation coefficients.

17.
Let f be a probability density function on ℝ^p. We consider the problem of sequential estimation of f at a given point x_0. For a class of recursive kernel estimators and a class of stopping rules based on the idea of fixed-width interval estimation, we investigate the asymptotic behaviour of the moments of the stopping rules when the length of the interval tends to zero.
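Recursive kernel estimators are convenient here precisely because they update one observation at a time, so a stopping rule can monitor the estimate as data arrive. A univariate sketch, with a Gaussian kernel and bandwidth h_n = n^(-1/5) as assumed choices:

```python
import math

def make_recursive_kde(x0):
    """Recursive kernel estimator of a density f at a fixed point x0:

        f_n = f_{n-1} + (K_{h_n}(x0 - X_n) - f_{n-1}) / n,

    i.e. f_n = ((n-1)/n) f_{n-1} + (1/n) K_{h_n}(x0 - X_n).
    Returns an update closure consuming observations one at a time.
    """
    state = {"n": 0, "f": 0.0}

    def update(x):
        state["n"] += 1
        n = state["n"]
        h = n ** -0.2  # assumed bandwidth sequence h_n = n^(-1/5)
        k = math.exp(-0.5 * ((x0 - x) / h) ** 2) / (h * math.sqrt(2 * math.pi))
        state["f"] += (k - state["f"]) / n  # recursive mean of kernel values
        return state["f"]

    return update
```

A fixed-width stopping rule would call `update` per observation and stop once an interval of the prescribed length around f_n reaches the prescribed coverage.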

18.
This article is concerned with confidence interval construction for functionals of the survival distribution for censored dependent data. We adopt the recently developed self‐normalization approach (Shao, 2010), which does not involve consistent estimation of the asymptotic variance, as implicitly used in the blockwise empirical likelihood approach of El Ghouch et al. (2011). We also provide a rigorous asymptotic theory to derive the limiting distribution of the self‐normalized quantity for a wide range of parameters. Additionally, finite‐sample properties of the self‐normalization‐based intervals are carefully examined, and a comparison with the empirical likelihood‐based counterparts is made.

19.
We consider fixed-size estimation for a linear function of mean vectors from π_i: N_p(μ_i, Σ_i), i = 1,…, k. The goal of inference is to construct a fixed-span confidence region with required accuracy. We find a new sample-size formulation and propose a two-stage estimation methodology to give the fixed-span confidence region satisfying the probability requirement approximately. We show that the proposed methodology greatly reduces the sample size to enjoy the asymptotic first-order consistency when the dimensionality p is extremely high. With the help of simulation studies, we show that the proposed methodology is still efficient even when p is moderate. We give an actual example to illustrate how it should be done by using the proposed methodology in the inference.

20.
Abstract

In this article, we investigate the problem of approximating the fixed point of a function using a Mann iterative process with random errors. After establishing some exponential inequalities, we prove the complete convergence of Mann's algorithm toward the fixed point and deduce a confidence interval for it. In addition, we establish the convergence rate of Mann's algorithm. Several numerical examples are sketched to illustrate the performance of the proposed algorithm.
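The Mann scheme with random errors is x_{n+1} = (1 - a_n) x_n + a_n (T(x_n) + e_n). A minimal sketch, where the map T, the step sizes, and the error terms are illustrative assumptions rather than the article's examples:

```python
def mann_iterate(T, x0, steps, alpha=lambda n: 1.0 / (n + 2), noise=lambda: 0.0):
    """Mann iteration with (optional) random errors:

        x_{n+1} = (1 - a_n) * x_n + a_n * (T(x_n) + e_n),

    where a_n = alpha(n) are the step sizes and e_n = noise() the
    error terms (zero by default, making the run deterministic).
    """
    x = x0
    for n in range(steps):
        a = alpha(n)
        x = (1 - a) * x + a * (T(x) + noise())
    return x
```

For a contraction such as T(t) = t/2 + 1 (fixed point 2), the averaged updates converge from any starting point; plugging a random `noise` callable in reproduces the perturbed setting whose confidence interval the article derives.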
