Similar documents
1.
The principal response curve (PRC) model is useful for analysing multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the newly proposed smooth approach. The generalisability of the sample PRCs can be judged using confidence bands. The quality of various bootstrap strategies to estimate such confidence bands for PRCs is evaluated. The best coverage was obtained with BCa intervals using a non-parametric bootstrap. The coverage appeared to be generally good, except in the case of exactly zero population PRCs for all conditions; there the behaviour is irregular, which is caused by the sign indeterminacy of the PRCs. The insights obtained into the optimal bootstrap strategy are useful when applying the PRC model and, more generally, for estimating confidence intervals in methods based on the singular value decomposition.
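The BCa interval with a non-parametric bootstrap, which the abstract above found to give the best coverage, can be sketched in pure Python for a scalar statistic. This is an illustrative simplification (the abstract's setting is multivariate PRCs, not a scalar statistic), and the function name, defaults, and resample count are assumptions, not the authors' code.

```python
import random
import statistics
from statistics import NormalDist

def bca_interval(data, stat, b=2000, alpha=0.05, seed=0):
    """Nonparametric BCa bootstrap confidence interval for stat(data)."""
    rng = random.Random(seed)
    n = len(data)
    theta_hat = stat(data)
    # Bootstrap replicates of the statistic, sorted for quantile lookup.
    boots = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                   for _ in range(b))
    nd = NormalDist()
    # Bias-correction term z0 from the fraction of replicates below theta_hat.
    prop = sum(t < theta_hat for t in boots) / b
    prop = min(max(prop, 1 / (2 * b)), 1 - 1 / (2 * b))  # guard against 0 or 1
    z0 = nd.inv_cdf(prop)
    # Acceleration term from the jackknife.
    jack = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    jmean = statistics.fmean(jack)
    num = sum((jmean - j) ** 3 for j in jack)
    den = 6 * (sum((jmean - j) ** 2 for j in jack)) ** 1.5
    a = num / den if den else 0.0
    def adjusted_quantile(q):
        z = nd.inv_cdf(q)
        p = nd.cdf(z0 + (z0 + z) / (1 - a * (z0 + z)))
        return boots[min(b - 1, max(0, int(p * b)))]
    return adjusted_quantile(alpha / 2), adjusted_quantile(1 - alpha / 2)
```

Unlike the plain percentile interval, the endpoints here are shifted by the bias-correction and acceleration terms, which is what tends to improve coverage for skewed bootstrap distributions.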

2.
The statistical models and methods for lifetime data mainly deal with continuous nonnegative lifetime distributions. However, discrete lifetimes arise in various common situations where either clock time is not the best scale for measuring lifetime or the lifetime is measured discretely. In most settings involving lifetime data, the population under study is not homogeneous. Mixture models, in particular mixtures of discrete distributions, provide a natural answer to this problem. Nonparametric mixtures of power series distributions are considered, for instance nonparametric mixtures of Poisson laws or of geometric laws. The mixing distribution is estimated by nonparametric maximum likelihood (NPML). Next, the NPML estimator is used to build estimates and confidence intervals for the hazard rate function of the discrete lifetime distribution. To improve the performance of the confidence intervals, a bootstrap procedure is considered in which the estimated mixture is used for resampling. Various bootstrap confidence intervals are investigated and compared to the confidence intervals obtained directly from the NPML estimates.
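A bootstrap confidence interval for the discrete hazard rate h(k) = P(X = k)/P(X >= k) can be sketched as follows. Note the simplification: the abstract resamples from the NPML-fitted mixture, whereas this sketch resamples from the empirical distribution (a plain percentile bootstrap); the function names and defaults are illustrative assumptions.

```python
import random

def hazard(lifetimes, k):
    """Discrete hazard h(k) = P(X = k) / P(X >= k) from observed lifetimes."""
    at_risk = sum(1 for x in lifetimes if x >= k)
    events = sum(1 for x in lifetimes if x == k)
    return events / at_risk if at_risk else 0.0

def hazard_ci(lifetimes, k, b=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI, resampling from the empirical distribution."""
    rng = random.Random(seed)
    n = len(lifetimes)
    reps = sorted(hazard([lifetimes[rng.randrange(n)] for _ in range(n)], k)
                  for _ in range(b))
    return reps[int((alpha / 2) * b)], reps[int((1 - alpha / 2) * b)]
```

Replacing the empirical resampling step with draws from the estimated mixture would give the model-based bootstrap the abstract describes.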

3.
A calculator program has been written to give confidence intervals on branching ratios for rare decay modes (or similar quantities), calculated from the number of events observed, the acceptance factor, the background estimate and the associated errors. Results from different experiments (or different channels from the same experiment) can be combined. The calculator is available at http://www.slac.stanford.edu/~barlow/limits.html.

4.
The calculation of interval forecasts for highly persistent autoregressive (AR) time series based on the bootstrap is considered. Three methods are considered for countering the small-sample bias of least-squares estimation for processes with roots close to the unit circle: a bootstrap bias-corrected OLS estimator; the use of the Roy-Fuller estimator in place of OLS; and the use of the Andrews-Chen estimator in place of OLS. All three methods of bias correction yield superior results to the bootstrap without bias correction. Of the three, the bootstrap prediction intervals based on the Roy-Fuller estimator are generally superior to the other two. The small-sample performance of bootstrap prediction intervals based on the Roy-Fuller estimator is also investigated when the order of the AR model is unknown and has to be determined using an information criterion.
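The first of the three bias corrections above, a bootstrap bias-corrected OLS estimator feeding a residual-bootstrap prediction interval, can be sketched for a mean-zero AR(1). This is a minimal sketch under those assumptions, not the paper's procedure (which also covers the Roy-Fuller and Andrews-Chen estimators and unknown AR order); all names and defaults are illustrative.

```python
import random
import statistics

def ols_ar1(x):
    """OLS estimate of phi in x_t = phi * x_{t-1} + e_t (mean-zero AR(1))."""
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(a * a for a in x[:-1])
    return num / den

def bootstrap_pi(x, horizon=1, b=999, alpha=0.05, seed=1):
    """Bias-corrected OLS + residual bootstrap percentile prediction interval."""
    rng = random.Random(seed)
    phi = ols_ar1(x)
    resid = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    # Bootstrap bias correction: regenerate series from the fitted model,
    # re-estimate, and subtract the estimated bias.
    reps = []
    for _ in range(200):
        xs = [x[0]]
        for _ in range(len(x) - 1):
            xs.append(phi * xs[-1] + rng.choice(resid))
        reps.append(ols_ar1(xs))
    phi_bc = 2 * phi - statistics.fmean(reps)  # phi - (mean(reps) - phi)
    # Bootstrap future paths from the bias-corrected model.
    finals = []
    for _ in range(b):
        y = x[-1]
        for _ in range(horizon):
            y = phi_bc * y + rng.choice(resid)
        finals.append(y)
    finals.sort()
    return phi_bc, (finals[int((alpha / 2) * b)],
                    finals[int((1 - alpha / 2) * b)])
```

For roots very close to the unit circle, this simple correction can push the estimate past one, which is one motivation for the restricted Roy-Fuller and Andrews-Chen estimators the abstract prefers.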

5.
The application of fuzzy set theory to statistical confidence intervals for unknown fuzzy parameters is proposed in this paper by considering fuzzy random variables. In order to obtain the belief degrees in the sense of fuzzy set theory, the original problem is transformed into optimization problems, and a computational procedure for solving them is provided. A numerical example is also given to illustrate the possible application of fuzzy set theory to statistical confidence intervals.

6.
In Balabdaoui, Rufibach, and Wellner (2009), pointwise asymptotic theory was developed for the nonparametric maximum likelihood estimator of a log-concave density. Here, the practical aspects of their results are explored. Namely, the theory is used to develop pointwise confidence intervals for the true log-concave density. To do this, the quantiles of the limiting process are estimated and various ways of estimating the nuisance parameter appearing in the limit are studied. The finite sample size behavior of these estimated confidence intervals is then studied via a simulation study of the empirical coverage probabilities.

7.
8.
This paper introduces a confidence interval estimate for measuring the bullwhip effect, which has been observed across most industries. Calculating a confidence interval usually requires an assumption about the underlying distribution; bootstrapping is a non-parametric, though computer-intensive, estimation method. In this paper, a simulation study of the behavior of the 95% bootstrap confidence interval for estimating the bullwhip effect is conducted. The effects of sample size, the autocorrelation coefficient of customer demand, lead time, and the bootstrap method on the 95% bootstrap confidence interval of the bullwhip effect are presented and discussed.
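Since the abstract stresses autocorrelated customer demand, a moving-blocks bootstrap of the paired (demand, order) series is one natural way to build the 95% interval for the bullwhip ratio Var(orders)/Var(demand). This is a hedged sketch, not the paper's exact design; the block length, measure definition, and function names are assumptions.

```python
import random
import statistics

def bullwhip_ratio(demand, orders):
    """Bullwhip measure: variance amplification of orders over demand."""
    return statistics.pvariance(orders) / statistics.pvariance(demand)

def block_bootstrap_ci(demand, orders, block=5, b=1000, alpha=0.05, seed=7):
    """Percentile CI from a moving-blocks bootstrap of paired observations."""
    rng = random.Random(seed)
    n = len(demand)
    pairs = list(zip(demand, orders))
    starts = range(n - block + 1)
    reps = []
    for _ in range(b):
        sample = []
        while len(sample) < n:
            s = rng.choice(starts)          # pick a block start at random
            sample.extend(pairs[s:s + block])
        sample = sample[:n]                 # trim to the original length
        d, o = zip(*sample)
        reps.append(bullwhip_ratio(d, o))
    reps.sort()
    return reps[int((alpha / 2) * b)], reps[int((1 - alpha / 2) * b)]
```

Resampling blocks rather than single periods preserves short-range demand autocorrelation inside each block, which matters exactly in the autocorrelated-demand scenarios the abstract studies.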

9.
Conditional confidence intervals for classification error rate
An observation is to be classified into one of several multivariate normal populations with equal covariance matrix. When the parameters are unknown, independent training samples are taken from the populations. We consider the construction of confidence intervals for the conditional error rate. The cases of two populations and three populations are studied in detail. We propose the conditional jackknife confidence interval and the conditional bootstrap confidence intervals of the conditional error rate. A Monte Carlo study is conducted to compare the confidence intervals.

10.
Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specification of the model, estimation of model parameters given observed data, verification of the model using goodness of fit, and characterization of the model using confidence bounds. Of these steps, only the first three have been applied widely in the literature, suggesting the need to dedicate a discussion to how the time-rescaling theorem, in combination with parametric bootstrap sampling, can be generally used to compute confidence bounds of point process models. In our first example, we use a generalized linear model of spiking propensity to demonstrate that confidence bounds derived from bootstrap simulations are consistent with those computed from closed-form analytic solutions. In our second example, we consider an adaptive point process model of hippocampal place field plasticity for which no analytical confidence bounds can be derived. We demonstrate how to simulate bootstrap samples from adaptive point process models, how to use these samples to generate confidence bounds, and how to statistically test the hypothesis that neural representations at two time points are significantly different. These examples have been designed as useful guides for performing scientific inference based on point process models.

11.
We propose a statistical procedure for estimating the asymptotic variances and covariances of sample autocorrelations from a stationary time series so that confidence regions and tests on a finite subset of autocorrelations can be implemented. The corresponding algorithm is described. The accuracy of the asymptotic confidence intervals for finite samples is studied by Monte Carlo simulations. Further, our method is illustrated with examples from the literature.
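A common textbook baseline for the variances the abstract estimates is Bartlett's large-lag approximation, Var(r_k) ~ (1 + 2*sum_{j&lt;k} r_j^2)/n, which yields pointwise confidence intervals for each sample autocorrelation. The sketch below implements that baseline (not the paper's own, more general procedure); names and defaults are illustrative.

```python
import math
import statistics

def sample_acf(x, max_lag):
    """Sample autocorrelations r_1, ..., r_max_lag."""
    n = len(x)
    m = statistics.fmean(x)
    c0 = sum((v - m) ** 2 for v in x) / n
    return [sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / (n * c0)
            for k in range(1, max_lag + 1)]

def acf_confidence_intervals(x, max_lag, z=1.96):
    """Pointwise CIs for autocorrelations via Bartlett's large-lag formula:
    Var(r_k) ~ (1 + 2 * sum_{j < k} r_j^2) / n."""
    n = len(x)
    r = sample_acf(x, max_lag)
    intervals = []
    for k, rk in enumerate(r, start=1):
        var = (1 + 2 * sum(rj ** 2 for rj in r[:k - 1])) / n
        half = z * math.sqrt(var)
        intervals.append((rk - half, rk + half))
    return r, intervals
```

For white noise this reduces to the familiar +/-1.96/sqrt(n) bands; the paper's contribution is handling the joint covariances across lags, which this pointwise sketch ignores.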

12.
A simplified model is proposed for the statistical properties of frequency responses collected via a spectrum analyser in the presence of corrupting noise; the model is validated by applying statistical tests to experimental data. Coherency function estimates are used together with a weighted non-linear least-squares optimization to obtain approximate confidence intervals for transfer function parameters. These are then compared with the parameter values obtained under noise-free conditions.

13.
Some properties of exact (nonasymptotic) confidence intervals for unknown probabilities are studied. Both experiments based on the Bernoulli scheme and more general experiments, including those with indistinguishable outcomes, are considered.

14.
It is well known that when the data may contain outliers or other departures from the assumed model, classical inference methods can be seriously affected and yield confidence levels much lower than the nominal ones. This paper proposes robust confidence intervals and tests for the parameters of the simple linear regression model that maintain their coverage and significance level, respectively, over whole contamination neighbourhoods. This approach can be used with any consistent regression estimator for which maximum bias curves are tabulated, and thus it is more widely applicable than previous proposals in the literature. Although the results regarding the coverage level of these confidence intervals are asymptotic in nature, simulation studies suggest that these robust inference procedures work well for small samples, and compare very favourably with earlier proposals in the literature.

15.
The paper proposes a practical procedure for obtaining a confidence interval (CI) for the parameter π of the Bernoulli distribution. Let x be the observed number of successes of a random sample of size n from this distribution. The procedure is as follows: use Table 1 to determine whether the given pair (n,x) is a small or a large sample pair. If the small sample situation applies then use Table 2, which gives the Sterne–Crow CI. Otherwise, use the Anscombe CI, for which practical formulas are given.

16.
A new approach to constructing two-sided and one-sided fuzzy confidence intervals for a fuzzy parameter is introduced, based on normal fuzzy random variables. Fuzzy data, which are observations of normal fuzzy random variables, are used in constructing such fuzzy confidence intervals. Usual methods of finding confidence intervals for parameters obtained from h-level sets of the fuzzy parameter are invoked to construct the fuzzy confidence intervals; the crisp data used in constructing these confidence intervals come from h-level sets of the fuzzy observations. Combining such confidence intervals yields a fuzzy set on the class of all fuzzy parameters, which is called the fuzzy confidence interval. A criterion is then proposed to determine the degree of membership of every fuzzy parameter in the introduced fuzzy confidence interval. A numerical example is provided to clarify the proposed method. Finally, the advantages of the proposed method with respect to some common methods are discussed.

17.
The issues of accuracy and reliability in measuring physical quantities and functions are studied. Estimation formulas for measured quantities are expressed as the additive sum of a useful signal and a noise disturbance. Finally, confidence interval estimates for measured quantities and for the moments of stationary and non-stationary stochastic functions are derived.

18.
Log periodogram regression is widely applied in empirical work to estimate the memory parameter, d, of long memory time series. This estimator is consistent for d&lt;1 and has a pivotal asymptotic normal distribution for d&lt;3/4. However, the asymptotic distribution is a poor approximation of the (unknown) finite sample distribution if the sample size is small. Finite sample improvements in the construction of confidence intervals can be achieved by different nonparametric bootstrap procedures based on the residuals of the log periodogram regression. In addition to the basic residual bootstrap, the local and block bootstraps seem adequate for replicating the structure that may arise in the errors of the regression when the series shows weak dependence in addition to long memory. The performance of different bias-correcting bootstrap techniques and of a bias-reduced log periodogram regression is also analyzed with a view to adjusting the bias caused by that structure. Finally, an application to the Nelson and Plosser US macroeconomic data is included.
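The log periodogram (GPH-type) regression underlying the abstract can be sketched directly: regress the log periodogram at the first m Fourier frequencies on log(4 sin^2(lambda_j/2)) and read d off the negated slope. This is a minimal pure-Python sketch of the point estimator only (no bootstrap, no bias reduction); the bandwidth choice m = sqrt(n) and function names are assumptions.

```python
import cmath
import math
import statistics

def gph_estimate(x, m=None):
    """Log periodogram regression estimate of the memory parameter d:
    regress log I(lambda_j) on log(4 sin^2(lambda_j / 2)), j = 1..m,
    at Fourier frequencies lambda_j = 2*pi*j/n; then d = -slope."""
    n = len(x)
    if m is None:
        m = int(n ** 0.5)          # conventional bandwidth choice
    mean = statistics.fmean(x)
    ys, xs = [], []
    for j in range(1, m + 1):
        lam = 2 * math.pi * j / n
        # Periodogram ordinate at lambda_j via a direct DFT sum.
        dft = sum((x[t] - mean) * cmath.exp(-1j * lam * t) for t in range(n))
        periodogram = abs(dft) ** 2 / (2 * math.pi * n)
        ys.append(math.log(periodogram))
        xs.append(math.log(4 * math.sin(lam / 2) ** 2))
    xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
             / sum((a - xbar) ** 2 for a in xs))
    return -slope
```

The bootstrap procedures in the abstract would then resample the residuals of this regression (basic, local, or block resampling) to build finite-sample confidence intervals for d.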

19.
20.
This paper discusses the use of supervised neural networks as a metamodeling technique for discrete-event, stochastic simulation. An (s, S) inventory simulation from the literature is translated into a metamodel through development of parallel neural networks, one estimating expected total cost and one estimating variance of expected total cost. These neural network estimates are used to form confidence intervals, which are compared for coverage to those formed directly by simulation. It is shown that the neural network metamodel is quite competitive in accuracy when compared to the simulation itself and, once trained, can operate in nearly real-time. A comparison of metamodel performance under interpolative versus extrapolative predictions is made.
