Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
The performance of model-based bootstrap methods for constructing point-wise confidence intervals around the survival function with interval-censored data is investigated. It is shown that bootstrapping from the nonparametric maximum likelihood estimator of the survival function is inconsistent for the current status model. A model-based smoothed bootstrap procedure is proposed and proved to be consistent. In fact, a general framework for proving the consistency of any model-based bootstrap scheme in the current status model is established. In addition, simulation studies are conducted to illustrate the (in)consistency of different bootstrap methods in mixed case interval censoring. The conclusions in the interval censoring model would extend more generally to estimators in regression models that exhibit non-standard rates of convergence.

2.
In this paper we consider the beta regression model recently proposed by Ferrari and Cribari-Neto [2004. Beta regression for modeling rates and proportions. J. Appl. Statist. 31, 799-815], which is tailored to situations where the response is restricted to the standard unit interval and the regression structure involves regressors and unknown parameters. We derive the second order biases of the maximum likelihood estimators and use them to define bias-adjusted estimators. As an alternative to the two analytically bias-corrected estimators discussed, we consider a bias correction mechanism based on the parametric bootstrap. The numerical evidence favors the bootstrap-based estimator and also one of the analytically corrected estimators. Several different strategies for interval estimation are also proposed. We present an empirical application.
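The parametric bootstrap bias correction discussed in entry 2 follows a generic recipe: fit the model, simulate many datasets from the fitted model, refit on each, and subtract the estimated bias. Below is a minimal sketch of that recipe applied to the variance MLE (whose bias is known, so the correction is easy to check), not to the beta regression of the paper; all names are illustrative.

```python
import random

def mle_var(xs):
    # MLE of the variance (divides by n, biased downward by factor (n-1)/n)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def parametric_bootstrap_bias_correction(xs, B=2000, seed=42):
    """Bias-correct the variance MLE by resampling from the fitted normal model."""
    rng = random.Random(seed)
    n = len(xs)
    mu_hat = sum(xs) / n
    v_hat = mle_var(xs)
    sd_hat = v_hat ** 0.5
    # Simulate from the fitted model, refit on each bootstrap sample
    boot = [mle_var([rng.gauss(mu_hat, sd_hat) for _ in range(n)])
            for _ in range(B)]
    bias = sum(boot) / B - v_hat          # estimated bias of the MLE
    return v_hat - bias                   # bias-corrected estimate (= 2*v_hat - mean(boot))

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(15)]
corrected = parametric_bootstrap_bias_correction(data)
```

The corrected value should land close to the unbiased sample variance, i.e., roughly the MLE inflated by n/(n-1).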

3.
We develop nearly unbiased estimators for the two-parameter Birnbaum-Saunders distribution [Birnbaum, Z.W., Saunders, S.C., 1969a. A new family of life distributions. J. Appl. Probab. 6, 319-327], which is commonly used in reliability studies. We derive modified maximum likelihood estimators that are bias-free to second order. We also consider bootstrap-based bias correction. The numerical evidence we present favors three bias-adjusted estimators. Different interval estimation strategies are evaluated. Additionally, we derive a Bartlett correction that improves the finite-sample performance of the likelihood ratio test.

4.
Feedforward neural networks, particularly multilayer perceptrons, are widely used in regression and classification tasks. A reliable and practical measure of prediction confidence is essential. In this work three alternative approaches to prediction confidence estimation are presented and compared. The three methods are the maximum likelihood, approximate Bayesian, and bootstrap techniques. We consider prediction uncertainty owing to both data noise and model parameter misspecification. The methods are tested on a number of controlled artificial problems and a real, industrial regression application, the prediction of paper "curl". Confidence estimation performance is assessed by calculating the mean and standard deviation of the prediction interval coverage probability. We show that treating data noise variance as a function of the inputs is appropriate for the curl prediction task. Moreover, we show that the mean coverage probability can only gauge confidence estimation performance as an average over the input space, i.e., global performance, and that the standard deviation of the coverage is unreliable as a measure of local performance. The approximate Bayesian approach is found to perform better in terms of global performance.

5.
In this paper, an improved method of model complexity selection for nonnative speech recognition is proposed by using maximum a posteriori (MAP) estimation of bias distributions. An algorithm is described for estimating hyper-parameters of the priors of the bias distributions, and an automatic accent classification algorithm is also proposed for integration with dynamic model selection and adaptation. Experiments were performed on the WSJ1 task with American English speech, British accented speech, and Mandarin Chinese accented speech. Results show that the use of prior knowledge of accents enabled more reliable estimation of bias distributions with very small amounts of adaptation speech, or without adaptation speech. Recognition results show that the new approach is superior to the previous maximum expected likelihood (MEL) method, especially when adaptation data are very limited.

6.
B. David, G. Bastin. Automatica, 2002, 38(1): 81-90
The Gohberg-Heinig explicit formula for the inversion of a block-Toeplitz matrix is used to build an estimator of the inverse of the covariance matrix of a multivariable autoregressive process. This estimator is then conveniently applied to maximum likelihood parameter estimation in nonlinear dynamical systems with output measurements corrupted by additive auto- and cross-correlated noise. An appealing computational simplification is obtained due to the particular form taken by the Gohberg-Heinig formula. The efficiency of the obtained estimation scheme is illustrated via Monte Carlo simulations and compared with an alternative that is obtained by extending a classical technique of linear system identification to the framework of this paper. These simulations show that the proposed method significantly improves the statistical properties of the estimator in comparison with classical methods. Finally, the ability of the method to provide, in a straightforward way, an accurate confidence region around the estimated parameters is also illustrated.

7.
A bootstrap-based methodology is developed for parameter estimation and polyspectral density estimation for the case where the approximating model of the underlying stochastic process is of non-minimum-phase autoregressive moving-average (ARMA) type, given a finite realisation of a single time series. The method is based on a minimum-phase/maximum-phase decomposition of the system function together with a time reversal step for the parameter and polyspectral confidence interval estimation. Simulation examples are provided to illustrate the proposed method.

8.
Vocal tract length normalization (VTLN) for standard filterbank-based Mel frequency cepstral coefficient (MFCC) features is usually implemented by warping the center frequencies of the Mel filterbank, and the warping factor is estimated using the maximum likelihood score (MLS) criterion. A linear transform (LT) equivalent for frequency warping (FW) would enable more efficient MLS estimation. We recently proposed a novel LT to perform FW for VTLN and model adaptation with standard MFCC features. In this paper, we present the mathematical derivation of the LT and give a compact formula to calculate it for any FW function. We also show that our LT is closely related to different LTs previously proposed for FW with cepstral features, and these LTs for FW are all shown to be numerically almost identical for the sine-log all-pass transform (SLAPT) warping functions. Our formula for the transformation matrix is, however, computationally simpler and, unlike other previous LT approaches to VTLN with MFCC features, no modification of the standard MFCC feature extraction scheme is required. In VTLN and speaker adaptive modeling (SAM) experiments with the DARPA resource management (RM1) database, the performance of the new LT was comparable to that of regular VTLN implemented by warping the Mel filterbank, when the MLS criterion was used for FW estimation. This demonstrates that the approximations involved do not lead to any performance degradation. Performance comparable to front end VTLN was also obtained with LT adaptation of HMM means in the back end, combined with mean bias and variance adaptation according to the maximum likelihood linear regression (MLLR) framework. The FW methods performed significantly better than standard MLLR for very limited adaptation data (1 utterance), and were equally effective with unsupervised parameter estimation. We also performed speaker adaptive training (SAT) with feature space LT denoted CLTFW. 
Global CLTFW SAT gave results comparable to SAM and VTLN. By estimating multiple CLTFW transforms using a regression tree, and including an additive bias, we obtained significantly improved results compared to VTLN, with increasing adaptation data.

9.
Several valuable data sources, including the census and the National Longitudinal Survey of Youth, include data measured using interval responses. Many empirical studies attempt estimation by assuming the data correspond to the interval midpoints and then use OLS or maximum likelihood assuming normality. Stata performs maximum likelihood estimation (MLE) under the assumption of normality, allowing for intra-group variation. In the presence of heteroskedasticity or distributional misspecification, these estimates are inconsistent. In this paper we focus on an estimation procedure that helps prevent distributional misspecification for interval censored data. We explore the application of partially adaptive estimation, which builds on the MLE framework with families of flexible parametric probability density functions that include the normal as a limiting case. These methods are used to estimate determinants associated with household expenditures based on US Census data. Monte Carlo simulations are performed to compare the relative efficiency of the different methods of estimation. We find that the flexible nature of our proposed partially adaptive estimation technique significantly reduces estimator bias and improves efficiency in the presence of distributional misspecification.

10.
Performance variability, stemming from nondeterministic hardware and software behaviors or deterministic behaviors such as measurement bias, is a well-known phenomenon of computer systems which increases the difficulty of comparing computer performance metrics, and is slated to become even more of a concern as interest in Big Data analytics increases. Conventional methods use various measures (such as the geometric mean) to quantify the performance of different benchmarks to compare computers without considering this variability, which may lead to wrong conclusions. In this paper, we propose three resampling methods for performance evaluation and comparison: a randomization test for a general performance comparison between two computers, bootstrapping confidence estimation, and an empirical distribution and five-number summary for performance evaluation. The results show that for both PARSEC and high-variance BigDataBench benchmarks 1) the randomization test substantially improves our chance to identify the difference between performance comparisons when the difference is not large; 2) bootstrapping confidence estimation provides an accurate confidence interval for the performance comparison measure (e.g., ratio of geometric means); and 3) when the difference is very small, a single test is often not enough to reveal the nature of the computer performance due to the variability of computer systems. We further propose using the empirical distribution to evaluate computer performance and a five-number summary to summarize computer performance. We use published SPEC 2006 results to investigate the sources of performance variation by predicting performance and relative variation for 8,236 machines. We achieve a correlation of predicted performances of 0.992 and a correlation of predicted and measured relative variation of 0.5. Finally, we propose the utilization of a novel biplotting technique to visualize the effectiveness of benchmarks and cluster machines by behavior. We illustrate the results and conclusions through detailed Monte Carlo simulation studies and real examples.
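The randomization test of entry 10 can be sketched for two sets of benchmark runtimes: permute the machine labels and count how often the permuted difference in log geometric means is at least as extreme as the observed one. A minimal sketch with made-up runtimes, not the paper's PARSEC/BigDataBench setup:

```python
import math
import random

def geo_mean(xs):
    # Geometric mean via the mean of logs (runtimes are positive)
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def randomization_test(a, b, n_perm=5000, seed=1):
    """Two-sided permutation p-value for equal geometric means."""
    rng = random.Random(seed)
    observed = abs(math.log(geo_mean(a) / geo_mean(b)))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # randomly reassign machine labels
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(math.log(geo_mean(pa) / geo_mean(pb))) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)             # add-one p-value, never exactly 0

# Illustrative runtimes (seconds) of two machines on the same six benchmarks
machine_a = [1.02, 0.98, 1.05, 1.01, 0.99, 1.03]
machine_b = [1.31, 1.25, 1.40, 1.28, 1.35, 1.30]
p = randomization_test(machine_a, machine_b)
```

With complete separation between the two runtime sets, the p-value comes out small, matching the intuition that the performance difference is real.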

11.
Survival analysis is widely applied to develop injury risk curves from biomechanical data. To obtain more accurate estimation of confidence intervals of parameters, bootstrap method was evaluated by a designed simulation process. Four censoring schemes and various sample sizes were considered to investigate failure time parameters corresponding to low-level injury probabilities. In the numerical simulations, the confidence interval ranges developed by bootstrapping were about two-third of the corresponding ranges calculated by asymptotical normal approximation and showed highest reduction for censored datasets with smaller sample size (≤ 40). In analysis of two experimental datasets with reduced sample sizes and mixed censored data, it was shown that the bootstrapping reduce significantly the confidence intervals as well. The results presented in this study recommend using bootstrapping in development of more accurate confidence intervals for risk curves in injury biomechanics, which consequently will lead to better regulations and safer vehicle designs.  相似文献   

12.
Research on multicast source authentication in dynamic Ad hoc network environments
This paper studies and analyzes source authentication techniques based on message authentication codes in Ad hoc network environments. A new bootstrapping scheme for source authentication is proposed for the TESLA scheme, using indirect bootstrapping to suit Ad hoc networks. Experimental data show that the new bootstrapping scheme considerably reduces the system load and thus improves authentication efficiency.

13.
Statistical models for spatio-temporal data are increasingly used in environmetrics, climate change, epidemiology, remote sensing and dynamical risk mapping. Due to the complexity of the relationships among the involved variables and the dimensionality of the parameter set to be estimated, techniques for model definition and estimation which can be worked out stepwise are welcome. In this context, hierarchical models are a suitable solution since they make it possible to define the joint dynamics and the full likelihood starting from simpler conditional submodels. Moreover, for a large class of hierarchical models, the maximum likelihood estimation procedure can be simplified using the Expectation-Maximization (EM) algorithm. In this paper, we define the EM algorithm for a rather general three-stage spatio-temporal hierarchical model, which also includes spatio-temporal covariates. In particular, we show that most of the parameters are updated using closed forms, and this guarantees stability of the algorithm, unlike the classical optimization techniques of the Newton-Raphson type for maximizing the full likelihood function. Moreover, we illustrate how the EM algorithm can be combined with a spatio-temporal parametric bootstrap for evaluating the parameter accuracy through standard errors and non-Gaussian confidence intervals. To do this, a new software library in the form of a standard R package has been developed. Moreover, realistic simulations on a distributed computing environment allow us to discuss the algorithm properties and performance also in terms of convergence iterations and computing times.

14.
This paper introduces the confidence interval estimate for measuring the bullwhip effect, which has been observed across most industries. Calculating a confidence interval usually needs the assumption about the underlying distribution. Bootstrapping is a non-parametric, but computer intensive, estimation method. In this paper, a simulation study on the behavior of the 95% bootstrap confidence interval for estimating bullwhip effect is made. Effects of sample size, autocorrelation coefficient of customer demand, lead time, and bootstrap methods on the 95% bootstrap confidence interval of bullwhip effect are presented and discussed.
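A percentile bootstrap interval for a bullwhip-style variance ratio (entry 14) can be sketched as follows. Resampling periods pairwise and i.i.d. is a simplifying assumption: the paper studies autocorrelated demand and lead-time effects, which a plain i.i.d. resample ignores. All series here are synthetic.

```python
import random

def variance(xs):
    # Unbiased sample variance (divides by n-1)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def bullwhip(demand, orders):
    # Bullwhip measure: variance amplification of orders relative to demand
    return variance(orders) / variance(demand)

def percentile_ci(demand, orders, B=2000, alpha=0.05, seed=7):
    """95% percentile bootstrap CI, resampling periods jointly."""
    rng = random.Random(seed)
    n = len(demand)
    stats = []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(bullwhip([demand[i] for i in idx],
                              [orders[i] for i in idx]))
    stats.sort()
    return stats[int((alpha / 2) * B)], stats[int((1 - alpha / 2) * B) - 1]

random.seed(3)
demand = [100 + random.gauss(0, 5) for _ in range(60)]
orders = [100 + random.gauss(0, 9) for _ in range(60)]   # amplified variability
lo, hi = percentile_ci(demand, orders)
```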

15.
This article considers the problem of binary classification and its assessment in a distribution-free approach. We estimate the area under the ROC curve (a more general performance metric than the error rate) of a classifier using a bootstrap-based estimator. We then use the method of the influence function to estimate the uncertainty of that estimate from the very same bootstrap samples. Monte Carlo trials show that small-sample estimates can be obtained with little bias.
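The bootstrap AUC estimator of entry 15 can be sketched by computing the AUC in its Mann-Whitney form (fraction of positive/negative pairs ranked correctly) and resampling cases with replacement; the paper's influence-function uncertainty step is omitted here, and the scores are made up.

```python
import random

def auc(scores, labels):
    # Mann-Whitney form of AUC: P(score_pos > score_neg), ties counting 1/2
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    total = 0.0
    for p in pos:
        for n in neg:
            total += 1.0 if p > n else (0.5 if p == n else 0.0)
    return total / (len(pos) * len(neg))

def bootstrap_auc(scores, labels, B=1000, seed=5):
    """Mean and 95% percentile interval of AUC over bootstrap resamples."""
    rng = random.Random(seed)
    n = len(scores)
    vals = []
    while len(vals) < B:
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:                      # resample must contain both classes
            vals.append(auc([scores[i] for i in idx], ys))
    vals.sort()
    return sum(vals) / B, (vals[int(0.025 * B)], vals[int(0.975 * B) - 1])

scores = [0.1, 0.3, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.85, 0.9]
labels = [0,   0,   0,    1,   0,    1,   1,   1,   0,    1]
mean_auc, (lo, hi) = bootstrap_auc(scores, labels)
```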

16.
In this paper, we are interested in the estimation of the reliability coefficient R=P(X>Y) when data on the minimum of two exponential samples, with random sample size, are available. Confidence intervals for R, based on maximum likelihood and bootstrap methods, are developed. The performance of these confidence intervals is studied through extensive simulation. A numerical example, based on real data, is presented to illustrate the implementation of the proposed procedure.
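For entry 16, if X ~ Exp(rate a) and Y ~ Exp(rate b), then R = P(X > Y) = b/(a+b), and the MLE plugs in the rate estimates a&#770; = 1/x&#772;, b&#770; = 1/y&#772;. A minimal percentile-bootstrap sketch follows; the paper's minima-with-random-sample-size structure is simplified here to two plain exponential samples.

```python
import random

def r_hat(xs, ys):
    # MLE of R = P(X > Y) for exponential X, Y: R = b/(a+b), a = 1/mean(X), b = 1/mean(Y)
    a = len(xs) / sum(xs)
    b = len(ys) / sum(ys)
    return b / (a + b)

def bootstrap_ci(xs, ys, B=2000, alpha=0.05, seed=11):
    """95% percentile bootstrap interval for R, resampling each sample separately."""
    rng = random.Random(seed)
    vals = []
    for _ in range(B):
        bx = [rng.choice(xs) for _ in xs]
        by = [rng.choice(ys) for _ in ys]
        vals.append(r_hat(bx, by))
    vals.sort()
    return vals[int(alpha / 2 * B)], vals[int((1 - alpha / 2) * B) - 1]

random.seed(2)
x = [random.expovariate(0.5) for _ in range(40)]   # mean 2, so X tends to exceed Y
y = [random.expovariate(1.0) for _ in range(40)]   # mean 1; true R = 1/(0.5+1) = 2/3
lo, hi = bootstrap_ci(x, y)
```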

17.
The half-life is defined as the number of periods required for the impulse response to a unit shock to a time series to dissipate by half. It is widely used as a measure of persistence, especially in international economics to quantify the degree of mean-reversion of the deviation from an international parity condition. Several studies have proposed bias-corrected point and interval estimation methods. However, they have found that the confidence intervals are rather uninformative with their upper bound being either extremely large or infinite. This is largely due to the distribution of the half-life estimator being heavily skewed and multi-modal. A bias-corrected bootstrap procedure for the estimation of half-life is proposed, adopting the highest density region (HDR) approach to point and interval estimation. The Monte Carlo simulation results reveal that the bias-corrected bootstrap HDR method provides an accurate point estimator, as well as tight confidence intervals with superior coverage properties to those of its alternatives. As an application, the proposed method is employed for half-life estimation of the real exchange rates of 17 industrialized countries. The results indicate much faster rates of mean-reversion than those reported in previous studies.
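The basic quantity behind entry 17 is the half-life of an AR(1) process with coefficient rho, h = ln(0.5)/ln(rho): the number of periods for the impulse response rho**h to fall to one half. The sketch below is the naive plug-in estimator that the paper improves on (no bias correction, no HDR interval), run on a simulated series.

```python
import math
import random

def ar1_coef(y):
    # OLS slope of y_t on y_{t-1} for the demeaned series
    m = sum(y) / len(y)
    z = [v - m for v in y]
    num = sum(z[t] * z[t - 1] for t in range(1, len(z)))
    den = sum(z[t - 1] ** 2 for t in range(1, len(z)))
    return num / den

def half_life(rho):
    # Periods for the impulse response rho**h to dissipate by half
    return math.log(0.5) / math.log(rho)

# Simulate an AR(1) with rho = 0.9 (true half-life ~ 6.6 periods)
random.seed(9)
rho_true = 0.9
y = [0.0]
for _ in range(500):
    y.append(rho_true * y[-1] + random.gauss(0, 1))
rho_hat = ar1_coef(y)
h = half_life(rho_hat)
```

The plug-in estimate inherits the downward small-sample bias of the OLS AR(1) coefficient, which is exactly why the paper bias-corrects before computing the half-life.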

18.
This paper proposes a new method of interval estimation for the long run response (or elasticity) parameter from a general linear dynamic model. We employ the bias-corrected bootstrap, in which small sample biases associated with the parameter estimators are adjusted in two stages of the bootstrap. As a means of bias-correction, we use alternative analytic and bootstrap methods. To take atypical properties of the long run elasticity estimator into account, the highest density region (HDR) method is adopted for the construction of confidence intervals. From an extensive Monte Carlo experiment, we found that the HDR confidence interval based on indirect analytic bias-correction performs better than other alternatives, providing tighter intervals with excellent coverage properties. Two case studies (demand for oil and demand for beef) illustrate the results of the Monte Carlo experiment with respect to the superior performance of the confidence interval based on indirect analytic bias-correction.

19.
Accurate estimation of reliability of a system is a challenging task when only limited samples are available. This paper presents the use of the bootstrap method to safely estimate the reliability with the objective of obtaining a conservative but not overly conservative estimate. The performance of the bootstrap method is compared with alternative conservative estimation methods, based on biasing the distribution of system response. The relationship between accuracy and conservativeness of the estimates is explored for normal and lognormal distributions. In particular, detailed results are presented for the case when the goal has a 95% likelihood to be conservative. The bootstrap approach is found to be more accurate for this level of conservativeness. We explore the influence of sample size and target probability of failure on the quality of estimates, and show that for a given level of conservativeness, small sample sizes and low probabilities of failure can lead to a high likelihood of large overestimation. However, this likelihood can be reduced by increasing the sample size. Finally, the conservative approach is applied to the reliability-based optimization of a composite panel under thermal loading.
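A conservative bootstrap estimate in the spirit of entry 19 can be sketched by taking an upper percentile of the bootstrap distribution of the estimated failure probability, so the reported value has roughly a 95% likelihood of being on the safe (high) side. The 95% level, threshold, and Gaussian response below are illustrative assumptions, not the paper's panel problem.

```python
import random

def failure_prob(samples, threshold):
    # Plain empirical probability that the response exceeds the threshold
    return sum(1 for s in samples if s > threshold) / len(samples)

def conservative_estimate(samples, threshold, B=2000, q=0.95, seed=4):
    """q-th percentile of the bootstrap distribution of the failure probability."""
    rng = random.Random(seed)
    n = len(samples)
    vals = []
    for _ in range(B):
        res = [samples[rng.randrange(n)] for _ in range(n)]
        vals.append(failure_prob(res, threshold))
    vals.sort()
    return vals[int(q * B) - 1]

random.seed(8)
resp = [random.gauss(0, 1) for _ in range(200)]       # simulated system responses
p_point = failure_prob(resp, 2.0)                     # plain point estimate
p_cons = conservative_estimate(resp, 2.0)             # conservative (biased-high) estimate
```

As the entry notes, with small samples and rare failures the upper percentile can sit far above the point estimate; more samples tighten the bootstrap distribution and shrink that gap.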

20.
Some work has been done in the past on the estimation of parameters of the three-parameter lognormal distribution based on complete and censored samples. In this article, we develop inferential methods based on progressively Type-II censored samples from a three-parameter lognormal distribution. In particular, we use the EM algorithm as well as some other numerical methods to determine maximum likelihood estimates (MLEs) of parameters. The asymptotic variances and covariances of the MLEs from the EM algorithm are computed by using the missing information principle. An alternative estimator, which is a modification of the MLE, is also proposed. The methodology developed here is then illustrated with some numerical examples. Finally, we also discuss the interval estimation based on large-sample theory and examine the actual coverage probabilities of these confidence intervals in case of small samples by means of a Monte Carlo simulation study.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号