20 similar documents retrieved (search time: 10 ms)
1.
We derive pointwise exact bootstrap distributions of ROC curves and the difference between ROC curves for threshold and vertical averaging. From these distributions, pointwise confidence intervals are derived and their performance is measured in terms of coverage accuracy. Improvements over techniques currently in use are obtained, in particular at the extremes of ROC curves, where we show that the drastic falls in coverage accuracy typical of existing methods can be avoided.
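As a rough illustration of the pointwise idea (not of the exact bootstrap distributions derived in the paper), the sketch below computes percentile-bootstrap confidence intervals for a ROC curve under vertical averaging, i.e., at fixed false-positive rates. The score data, the FPR grid and the number of replicates are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 200)   # scores of negative cases (invented example data)
pos = rng.normal(1.0, 1.0, 200)   # scores of positive cases

def roc_at_fpr(neg_scores, pos_scores, fpr):
    """TPR of the empirical ROC curve at a fixed FPR (vertical averaging)."""
    thr = np.quantile(neg_scores, 1.0 - fpr)   # threshold achieving the target FPR
    return np.mean(pos_scores > thr)

B = 2000
fpr_grid = np.array([0.05, 0.10, 0.25, 0.50])
boot = np.empty((B, fpr_grid.size))
for b in range(B):
    nb = rng.choice(neg, neg.size, replace=True)   # resample each class separately
    pb = rng.choice(pos, pos.size, replace=True)
    boot[b] = [roc_at_fpr(nb, pb, f) for f in fpr_grid]

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for f, l, h in zip(fpr_grid, lo, hi):
    print(f"FPR={f:.2f}: 95% pointwise CI for TPR = [{l:.3f}, {h:.3f}]")
```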
2.
Jinming Wu 《Computers & Mathematics with Applications》2011,61(5):1425-1430
In this paper, we present a new approach to constructing shape-preserving interpolation curves. The basic idea is first to approximate the interpolated points with a class of MQ quasi-interpolation operators and then to interpolate the points exactly using multivariate interpolation with compactly supported radial basis functions. This approach possesses certain shape-preserving properties and good approximation behavior, and the proposed algorithm is easy to implement.
3.
An approach to the design of effective computer-based systems is discussed. This approach exploits the user's traditional diagrammatic notations in an effort to achieve usability for experts other than computer professionals. Notations are formalized as visual languages, thus allowing the design of visual editors, interpreters, and compilers. The users themselves exploit these tools to define a hierarchy of environments by a bootstrapping approach. By navigating within these environments, they can progressively design visual interfaces and computing tools that allow them not only to execute the required computational tasks, but also to gain insight into and control the computational process, and check the results.
4.
Hutson AD 《Computer methods and programs in biomedicine》2004,73(2):129-134
In this note, we outline a simple-to-use yet powerful bootstrap algorithm for handling correlated outcome variables, in terms of either hypothesis testing or confidence intervals, using only the marginal models. This new method can handle combinations of continuous and discrete data and can be used in conjunction with other covariates in a model. The procedure is based upon estimating the family-wise error (FWE) rate and then making a Bonferroni-type correction. A simulation study illustrates the accuracy of the algorithm over a variety of correlation structures.
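The sketch below illustrates the general idea of estimating a family-wise error (FWE) rate by resampling whole subjects and then choosing a Bonferroni-type per-test level; the simulated outcomes, the marginal one-sample t-tests and the grid of candidate levels are assumptions made for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, m = 60, 4
latent = rng.normal(size=(n, 1))                  # shared factor induces correlation
y = latent + rng.normal(size=(n, m))              # invented correlated outcomes

centered = y - y.mean(axis=0)                     # impose the marginal nulls
B = 2000
pvals_boot = np.empty((B, m))
for b in range(B):
    idx = rng.integers(0, n, n)                   # resample whole subjects,
    yb = centered[idx]                            # preserving the correlation
    pvals_boot[b] = [stats.ttest_1samp(yb[:, j], 0.0).pvalue for j in range(m)]

alphas = np.linspace(0.001, 0.05, 50)
fwe = [(pvals_boot.min(axis=1) < a).mean() for a in alphas]   # estimated FWE per level
ok = [a for a, f in zip(alphas, fwe) if f <= 0.05]
alpha_star = max(ok) if ok else alphas[0]
print(f"Bonferroni-type per-test level with estimated FWE <= 0.05: {alpha_star:.4f}")
```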
5.
P. Schaefer M. Boocock S. Rosenberg M. Jäger Kh. Schaub 《International Journal of Industrial Ergonomics》2007,37(11-12):893
A new procedure for determining the risk of injury associated with manual pushing and pulling was developed based upon characteristics of the user population (i.e. age, gender and stature) and task requirements (i.e. working height, task frequency and travel distance). The procedure has been integrated into international (ISO, 2004) and European (CEN, 2004) standards for determining recommended force limits for pushing and pulling that can be adapted to suit the user population. These limits consider the muscular strength of the intended target population, as well as the compressive loads on the lumbar spine. Examples are provided to demonstrate variability of the proposed ‘safety’ limits for different task scenarios.
Relevance to industry
The manual handling of physical loads is a known risk factor for work-related musculoskeletal disorders (WMSD). These disorders are common throughout industry and may incur considerable costs to both the employer and the employee. The new risk rating procedure enables pushing and pulling tasks to be more closely aligned to the capabilities of the user population and therefore has an important role to play in helping to reduce the suffering and costs associated with these disorders.
6.
Alicia Pérez-Alonso 《Computational statistics & data analysis》2007,51(7):3484-3504
A possible approach to testing for conditional symmetry in time series regression models is discussed. To that end, the Bai and Ng test is utilized. The performance of some popular (unconditional) symmetry tests, designed for raw observations, when applied to regression residuals is also examined. The tests considered include the coefficient of skewness, a joint test of the third and fifth moments, the Runs test, the Wilcoxon signed-rank test and the Triples test. An easy-to-implement symmetric bootstrap procedure is proposed to calculate critical values for these tests, and its consistency is shown. A simple Monte Carlo experiment is conducted to explore the finite-sample properties of all the tests.
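A minimal sketch of the symmetric-bootstrap idea, applied here to the skewness-coefficient test on OLS residuals, is given below; the data-generating model and the sign-flip resampling scheme are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.standard_t(df=6, size=n)   # symmetric errors, so H0 holds

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS fit
resid = y - X @ beta

t_obs = stats.skew(resid)                          # observed skewness coefficient

B = 2000
t_boot = np.empty(B)
for b in range(B):
    signs = rng.choice([-1.0, 1.0], size=n)        # random sign flips symmetrize
    rb = signs * rng.choice(resid, size=n, replace=True)
    t_boot[b] = stats.skew(rb)

crit = np.quantile(np.abs(t_boot), 0.95)
print(f"|skewness| = {abs(t_obs):.3f}, bootstrap 5% critical value = {crit:.3f}")
print("reject symmetry" if abs(t_obs) > crit else "do not reject symmetry")
```

Random sign flips force the resampled residuals to have a symmetric distribution, so the bootstrap critical value is computed under the null of symmetry regardless of the shape of the observed residuals.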
7.
To make inference on the functional dependence of regression parameters, a new factor-based bootstrap approach is introduced that is robust under various forms of heteroskedastic error terms. Modeling the functional coefficient parametrically, the bootstrap approximation of an F-statistic is shown to hold asymptotically. In simulation studies with both parametric and nonparametric functional coefficients, factor-based bootstrap inference outperforms the wild bootstrap and pairs bootstrap approaches in terms of rejection frequencies under the null hypothesis. Applying the functional coefficient model to a cross-sectional investment regression on savings, the saving retention coefficient is found to depend on third variables such as the population growth rate and the openness ratio.
8.
9.
10.
《Computational statistics & data analysis》2008,52(12):5731-5742
This article is about testing the equality of several normal means when the variances are unknown and arbitrary, i.e., the setup of one-way ANOVA. Even though several tests are available in the literature, none of them perform well in terms of Type I error probability under various sample size and parameter combinations. In fact, Type I errors can be highly inflated for some of the commonly used tests; a serious issue that appears to have been overlooked. We propose a parametric bootstrap (PB) approach and compare it with three existing location-scale invariant tests: the Welch test, the James test and the generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test is the best among the four tests with respect to Type I error rates. The PB test performs very satisfactorily even for small samples, while the Welch test and the GF test exhibit poor Type I error properties when the sample sizes are small and/or the number of means to be compared is moderate to large. The James test performs better than the Welch test and the GF test. It is also noted that the same tests can be used to test the significance of the random effect variance component in a one-way random model under unequal error variances. Such models are widely used to analyze data from inter-laboratory studies. The methods are illustrated using some examples.
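The sketch below illustrates a parametric bootstrap of this kind for the one-way layout with unequal variances; the Welch-type statistic is one natural choice and is only assumed to be in the spirit of the PB test described above.

```python
import numpy as np

rng = np.random.default_rng(3)
# invented example data: equal means, very unequal variances and sample sizes
samples = [rng.normal(10, 1.0, 8), rng.normal(10, 3.0, 12), rng.normal(10, 5.0, 6)]

n = np.array([len(s) for s in samples], dtype=float)
xbar = np.array([s.mean() for s in samples])
s2 = np.array([s.var(ddof=1) for s in samples])

def welch_type_stat(means, variances, sizes):
    w = sizes / variances                    # precision weights
    grand = np.sum(w * means) / np.sum(w)    # weighted grand mean
    return np.sum(w * (means - grand) ** 2)

t_obs = welch_type_stat(xbar, s2, n)

B, exceed = 5000, 0
for _ in range(B):
    # simulate the summary statistics under H0 (common mean 0), plugging in s2
    xbar_b = rng.normal(0.0, np.sqrt(s2 / n))
    s2_b = s2 * rng.chisquare(n - 1) / (n - 1)
    if welch_type_stat(xbar_b, s2_b, n) > t_obs:
        exceed += 1
print(f"parametric bootstrap p-value: {exceed / B:.4f}")
```

Because only the group means and variances are simulated (from their normal and scaled chi-square null distributions), the procedure is a parametric bootstrap rather than a resampling of the raw observations.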
11.
Half-life estimation based on the bias-corrected bootstrap: A highest density region approach
Jae H. Kim Param Silvapulle Rob J. Hyndman 《Computational statistics & data analysis》2007,51(7):3418-3432
The half-life is defined as the number of periods required for the impulse response to a unit shock to a time series to dissipate by half. It is widely used as a measure of persistence, especially in international economics to quantify the degree of mean-reversion of the deviation from an international parity condition. Several studies have proposed bias-corrected point and interval estimation methods. However, they have found that the confidence intervals are rather uninformative with their upper bound being either extremely large or infinite. This is largely due to the distribution of the half-life estimator being heavily skewed and multi-modal. A bias-corrected bootstrap procedure for the estimation of half-life is proposed, adopting the highest density region (HDR) approach to point and interval estimation. The Monte Carlo simulation results reveal that the bias-corrected bootstrap HDR method provides an accurate point estimator, as well as tight confidence intervals with superior coverage properties to those of its alternatives. As an application, the proposed method is employed for half-life estimation of the real exchange rates of 17 industrialized countries. The results indicate much faster rates of mean-reversion than those reported in previous studies.
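As a simplified illustration, the sketch below bootstraps the half-life of an AR(1) process and summarizes the bootstrap distribution with a highest-density-region (HDR) point estimate and interval; the simulated series, the KDE-based HDR device and the omission of the bias-correction step are assumptions made for the example.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
T, rho_true = 200, 0.9
y = np.zeros(T)
for t in range(1, T):                      # simulate an AR(1) series (invented data)
    y[t] = rho_true * y[t - 1] + rng.normal()

def ar1_coef(z):
    return np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)

def half_life(rho):
    rho = min(abs(rho), 0.999)             # guard against (near-)explosive estimates
    return np.log(0.5) / np.log(rho)

rho_hat = ar1_coef(y)
resid = y[1:] - rho_hat * y[:-1]

B = 2000
hl_boot = np.empty(B)
for b in range(B):
    e = rng.choice(resid, size=T - 1, replace=True)   # residual bootstrap
    yb = np.zeros(T)
    for t in range(1, T):
        yb[t] = rho_hat * yb[t - 1] + e[t - 1]
    hl_boot[b] = half_life(ar1_coef(yb))

kde = gaussian_kde(hl_boot)
dens = kde(hl_boot)
hdr = hl_boot[dens >= np.quantile(dens, 0.05)]        # draws inside the 95% HDR
print(f"HDR point estimate (mode): {hl_boot[np.argmax(dens)]:.2f} periods")
print(f"95% HDR interval (assuming one mode): [{hdr.min():.2f}, {hdr.max():.2f}]")
```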
12.
Catchment models simulate water and solute dynamics at catchment scales and are invaluable tools for natural resource management. Parameters for catchment models can provide useful information about the importance of the hydrological processes involved. We propose and demonstrate a bootstrap approach to assess parameter uncertainty in dynamic catchment models. This approach, which is non-Bayesian and essentially non-parametric, requires no distributional assumptions about parameters and only weak assumptions about the distributional form of the model residuals. It is able to handle autocorrelated model errors which are very common in the application of dynamic hydrological models at catchment scales. The ability of our bootstrap approach to assess parameter uncertainty is demonstrated using numerical experiments with the abc hydrological model and an application of a conceptual model of salt load from an irrigated catchment in southeastern Australia.
13.
This note concerns the construction of bootstrap simultaneous confidence intervals (SCI) for m parameters. Given B bootstrap samples, we suggest an algorithm with complexity O(mB log B). We apply our algorithm to construct a confidence region for time-dependent probabilities of progression in multiple sclerosis and for coefficients in a logistic regression analysis. Alternative normal-based simultaneous confidence intervals are presented and compared to the bootstrap intervals.
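One way such an algorithm can be organized is sketched below: sort each parameter's bootstrap draws once (the O(mB log B) step) and then binary-search the symmetric rank whose box-shaped region still has joint coverage of at least 1 − α. The correlated draws are simulated, and the construction is an assumption in the spirit of the abstract rather than the authors' exact algorithm.

```python
import numpy as np

def simultaneous_ci(boot, alpha=0.05):
    """Simultaneous percentile intervals from a (B, m) matrix of bootstrap draws."""
    B, m = boot.shape
    srt = np.sort(boot, axis=0)                       # the O(m*B*log B) step

    def coverage(k):                                  # joint coverage of the rank-k box
        lo, hi = srt[k], srt[B - 1 - k]
        return np.mean(np.all((boot >= lo) & (boot <= hi), axis=1))

    lo_k, hi_k = 0, B // 2 - 1                        # coverage(k) is nonincreasing in k,
    while lo_k < hi_k:                                # so binary-search the largest k
        mid = (lo_k + hi_k + 1) // 2                  # still covering >= 1 - alpha
        if coverage(mid) >= 1 - alpha:
            lo_k = mid
        else:
            hi_k = mid - 1
    return srt[lo_k], srt[B - 1 - lo_k]

# illustrative correlated bootstrap draws (invented, not the paper's data)
rng = np.random.default_rng(5)
cov = np.array([[1.0, 0.6, 0.3], [0.6, 1.0, 0.6], [0.3, 0.6, 1.0]]) / 100
boot = rng.multivariate_normal([0.5, 1.0, -0.2], cov, size=4000)

lo, hi = simultaneous_ci(boot)
for j, (l, h) in enumerate(zip(lo, hi)):
    print(f"parameter {j}: simultaneous 95% interval = [{l:.3f}, {h:.3f}]")
```

The binary search is valid because the joint coverage of the rank-k box is nonincreasing in k.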
14.
Dexter O. Cahoy 《Computational statistics & data analysis》2010,54(10):2306-2950
We introduce a bootstrap procedure to test the hypothesis H0 that K+1 variances are homogeneous. The procedure uses a variance-based statistic and is derived from a normal-theory test for equality of variances. The test equivalently expresses the hypothesis as H0: η1 = η2 = … = ηK = 0, where the ηi are log contrasts of the population variances. A box-type acceptance region is constructed to test the hypothesis H0. Simulation results indicate that our method is generally superior to the Shoemaker and Levene tests, and to the bootstrapped version of the Levene test, in controlling the Type I and Type II errors.
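The sketch below implements a box-type acceptance region of this general kind, using log-variance contrasts against the last group and a nonparametric within-group bootstrap; both choices are assumptions made for illustration and may differ from the procedure in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
# invented example data: the last group has a larger variance
groups = [rng.normal(0, 1.0, 25), rng.normal(0, 1.0, 30), rng.normal(0, 2.0, 20)]
K = len(groups) - 1

def log_contrasts(samples):
    logs2 = np.log([np.var(g, ddof=1) for g in samples])
    return logs2[:-1] - logs2[-1]                # eta_i = log(s_i^2) - log(s_{K+1}^2)

eta_hat = log_contrasts(groups)

B = 3000
dev = np.empty((B, K))
for b in range(B):
    boot = [rng.choice(g, size=g.size, replace=True) for g in groups]
    dev[b] = log_contrasts(boot) - eta_hat       # centred bootstrap deviations

scale = dev.std(axis=0, ddof=1)
c = np.quantile(np.max(np.abs(dev) / scale, axis=1), 0.95)   # box half-width factor

reject = np.any(np.abs(eta_hat) > c * scale)     # H0 rejected if 0 lies outside the box
print("eta_hat:", np.round(eta_hat, 3))
print("box half-widths:", np.round(c * scale, 3))
print("reject homogeneity of variances" if reject else "do not reject H0")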
15.
For constructing simultaneous confidence intervals for the ratios of means of several lognormal distributions, we propose a new parametric bootstrap method, which is different from an inaccurate parametric bootstrap method previously considered in the literature. Our proposed method is conceptually simpler than other proposed methods, which are based on the concepts of generalized pivotal quantities and fiducial generalized pivotal quantities. Also, our extensive simulation results indicate that our proposed method consistently performs better than other methods: its coverage probability is close to the nominal confidence level and the resulting intervals are typically shorter than the intervals produced by other methods.
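A minimal sketch of a parametric-bootstrap construction of simultaneous intervals for ratios of lognormal means (each mean being exp(μ + σ²/2)) follows; the max-modulus pivot and the choice of the first group as baseline are assumptions made for the example, not necessarily the construction proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
# invented lognormal samples for three groups
data = [rng.lognormal(0.0, 0.5, 15), rng.lognormal(0.2, 0.8, 20), rng.lognormal(0.1, 0.6, 12)]

logs = [np.log(d) for d in data]
n = np.array([len(l) for l in logs], dtype=float)
mu = np.array([l.mean() for l in logs])
s2 = np.array([l.var(ddof=1) for l in logs])
theta = mu + s2 / 2                      # log of each lognormal mean

B, k = 5000, len(data)
dev = np.empty((B, k - 1))
for b in range(B):
    mu_b = rng.normal(mu, np.sqrt(s2 / n))            # parametric bootstrap of summaries
    s2_b = s2 * rng.chisquare(n - 1) / (n - 1)
    theta_b = mu_b + s2_b / 2
    # deviations of the log-ratios against group 0
    dev[b] = (theta_b[1:] - theta_b[0]) - (theta[1:] - theta[0])

c = np.quantile(np.max(np.abs(dev), axis=1), 0.95)    # simultaneous half-width
for i in range(1, k):
    d = theta[i] - theta[0]
    print(f"mean_{i}/mean_0: simultaneous 95% CI = "
          f"[{np.exp(d - c):.3f}, {np.exp(d + c):.3f}]")
```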
16.
André Jacomel Torii, Rafael Holdorf Lopez, André Teófilo Beck, Leandro Fleck Fadel Miguel 《Structural and Multidisciplinary Optimization》2019,60(3):927-947
In recent years, several approaches have been proposed for solving reliability-based design optimization (RBDO), where the probability of failure is...
17.
Holger Dette 《Computational statistics & data analysis》2009,53(4):1339-1349
The difference between the regression functions of two stationary conditionally heteroskedastic autoregressive time series is tested. Under the null hypothesis, the functions can be equal or shifted. Local linear estimation of the regression function results in observable residuals. Bootstrap residuals lead to a marked empirical process as the test statistic, and a Kolmogorov-Smirnov version is applied. A simulation study for linear, exponential or trigonometric regression functions with homoskedastic or heteroskedastic errors finds the rejection probability under the null hypothesis to be near the nominal level. Comparing series with different combinations of linear, exponential and trigonometric functions, the rejection probabilities under the alternative are mixed.
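A heavily simplified sketch of this type of test is given below, using a Nadaraya-Watson smoother in place of local linear estimation, a pooled-fit marked-empirical-process statistic and an i.i.d. residual bootstrap; all three are illustrative substitutions rather than the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate(T, m):
    x = np.zeros(T + 1)
    for t in range(1, T + 1):
        x[t] = m(x[t - 1]) + 0.5 * rng.normal()
    return x[:-1], x[1:]                       # (lagged regressor, response)

def nw(x_eval, x, y, h=0.4):                   # Nadaraya-Watson smoother
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def m_true(u):
    return 0.6 * u

x1, y1 = simulate(200, m_true)                 # two series with the same regression
x2, y2 = simulate(200, m_true)                 # function, so H0 holds in this example
x_pool = np.concatenate([x1, x2])
y_pool = np.concatenate([y1, y2])
grid = np.sort(x_pool)

def ks_stat(resp1, resp2):
    m_pool = nw(x_pool, x_pool, np.concatenate([resp1, resp2]))
    e = np.concatenate([resp1, resp2]) - m_pool
    e1, e2 = e[:len(resp1)], e[len(resp1):]
    r1 = np.array([e1[x1 <= g].sum() for g in grid]) / np.sqrt(len(resp1))
    r2 = np.array([e2[x2 <= g].sum() for g in grid]) / np.sqrt(len(resp2))
    return np.max(np.abs(r1 - r2)), e

t_obs, resid = ks_stat(y1, y2)
fit1 = nw(x1, x_pool, y_pool)                  # pooled fit evaluated on each series
fit2 = nw(x2, x_pool, y_pool)

B, exceed = 300, 0
for _ in range(B):
    e_star = rng.choice(resid, size=resid.size, replace=True)
    y1_star = fit1 + e_star[:len(y1)]          # regenerate both series under H0
    y2_star = fit2 + e_star[len(y1):]
    if ks_stat(y1_star, y2_star)[0] >= t_obs:
        exceed += 1
print(f"KS-type statistic = {t_obs:.3f}, bootstrap p-value = {exceed / B:.3f}")
```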
18.
19.
We present an approach to the optimal fitting of a biarc-spline to a given B-spline curve. The objective is to minimize the area between the original B-spline curve and the fitted curve. Such an objective has obvious practical implications. This approach differs from conventional biarc curve-fitting techniques in two main aspects and has some desirable features. Firstly, it exploits the inherent freedom in the choice of the biarc that can be fitted to a given pair of end-points and their tangents. The conventional approach to biarc curve-fitting introduces additional constraints, such as a minimal difference in curvature, to uniquely determine successive biarcs. In this approach, such constraints are not imposed. Instead, the freedom is exploited in the problem formulation to achieve a better fit. Secondly, the end-points do not lie on the curve, so that appropriate tolerance control can be imposed through the use of additional constraints. Almost all previous biarc-fitting methods consider end-points that are on the original curve. As a result of these two aspects, the resulting biarc curve fits closely to the original curve with relatively few segments. This has a desirable effect on the surface finish, verification of CNC codes and memory requirements. Numerical results of the application of this approach to several examples are presented.
20.
José A. Villaseñor-Alva Elizabeth González-Estrada 《Computational statistics & data analysis》2009,53(11):3835-3841
This paper proposes a bootstrap goodness of fit test for the Generalized Pareto distribution (GPd) with shape parameter γ. The proposed test is an intersection–union test which tests separately the cases γ≥0 and γ<0 and rejects only if both cases are rejected. If the test does not reject, then it is known whether the shape parameter γ is positive or negative. A Monte Carlo simulation experiment was conducted to assess the power performance of the intersection–union test. The GPd hypothesis was tested on a data set containing Mexico City's ozone levels.
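The sketch below shows the intersection-union logic with a generic parametric-bootstrap Kolmogorov-Smirnov sub-test for each sign of the shape parameter; the KS distance and the boundary handling of the fitted shape are illustrative choices, not the authors' test statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# invented example data: a sample that truly follows a GPd with positive shape
data = stats.genpareto.rvs(c=0.2, scale=1.0, size=100, random_state=rng)

def sub_test(data, positive_shape, B=200):       # B kept small for speed
    c, loc, scale = stats.genpareto.fit(data, floc=0)
    # clamp the fitted shape to the sign allowed by this sub-hypothesis
    c = max(c, 0.0) if positive_shape else min(c, -1e-6)
    ks_obs = stats.kstest(data, "genpareto", args=(c, 0, scale)).statistic
    exceed = 0
    for _ in range(B):
        sim = stats.genpareto.rvs(c=c, scale=scale, size=data.size, random_state=rng)
        cb, _, sb = stats.genpareto.fit(sim, floc=0)
        cb = max(cb, 0.0) if positive_shape else min(cb, -1e-6)
        if stats.kstest(sim, "genpareto", args=(cb, 0, sb)).statistic >= ks_obs:
            exceed += 1
    return exceed / B                            # bootstrap p-value of the sub-test

p_pos = sub_test(data, positive_shape=True)
p_neg = sub_test(data, positive_shape=False)
alpha = 0.05
reject_gpd = (p_pos < alpha) and (p_neg < alpha)   # intersection-union decision
print(f"p-value (gamma >= 0): {p_pos:.3f}, p-value (gamma < 0): {p_neg:.3f}")
print("reject the GPd hypothesis" if reject_gpd else "do not reject the GPd")
```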