Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
A new univariate sampling approach for bootstrapping correlation coefficients is proposed and evaluated. Bootstrapping correlations to define confidence intervals or to test hypotheses has previously relied on repeated bivariate sampling of observed (x,y) values to create an empirical sampling distribution. Bivariate sampling matches the logic of confidence interval construction, but hypothesis testing logic suggests that x and y should be sampled independently. This study uses Monte Carlo methods to compare the univariate bootstrap with 3 bivariate bootstrap procedures and with the traditional parametric procedure, using various sample sizes, population correlations, and population distributions. Results suggest that the univariate bootstrap is superior to other bootstrap procedures in many hypothesis testing settings, and even improves on parametric hypothesis testing in certain cases. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
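A minimal sketch of the univariate bootstrap idea described in this abstract, assuming a two-sided test of H0: ρ = 0; the function name and simulated data are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def univariate_bootstrap_pvalue(x, y, n_boot=10_000):
    """Two-sided bootstrap test of H0: rho = 0, resampling x and y
    independently (univariate sampling) to build the null distribution."""
    r_obs = np.corrcoef(x, y)[0, 1]
    n = len(x)
    null_rs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)   # x resampled alone
        yb = rng.choice(y, size=n, replace=True)   # y resampled alone
        null_rs[b] = np.corrcoef(xb, yb)[0, 1]     # pairing is random under H0
    return np.mean(np.abs(null_rs) >= abs(r_obs))

x = rng.normal(size=30)
y = 0.5 * x + rng.normal(size=30)
print(univariate_bootstrap_pvalue(x, y))
```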

2.
The bootstrap, a computer-intensive approach to statistical data analysis, has been recommended as an alternative to parametric approaches. Advocates claim it is superior because it is not burdened by potentially unwarranted normal theory assumptions and because it retains information about the form of the original sample. Empirical support for its superiority, however, is quite limited. The present article compares the bootstrap and parametric approaches to estimating confidence intervals and Type I error rates of the correlation coefficient. The parametric approach is superior to the bootstrap under both assumption violation and nonviolation. The bootstrap results in overly restricted confidence intervals and overly liberal Type I error rates. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
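A sketch of the two procedures being compared, assuming the standard Fisher-z parametric interval and a bivariate percentile bootstrap; names and data are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def fisher_ci(r, n, alpha=0.05):
    """Parametric (normal-theory) CI via Fisher's z transformation."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    zc = stats.norm.ppf(1 - alpha / 2)
    return np.tanh(z - zc * se), np.tanh(z + zc * se)

def percentile_bootstrap_ci(x, y, n_boot=10_000, alpha=0.05):
    """Bivariate percentile bootstrap CI: resample (x, y) pairs jointly."""
    n = len(x)
    rs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample row indices
        rs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return np.quantile(rs, [alpha / 2, 1 - alpha / 2])

x = rng.normal(size=25)
y = 0.4 * x + rng.normal(size=25)
r = np.corrcoef(x, y)[0, 1]
print("Fisher z CI: ", fisher_ci(r, len(x)))
print("Bootstrap CI:", percentile_bootstrap_ci(x, y))
```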

3.
It is argued that past research has indicated that the parametric correlation coefficient is superior to the bootstrap approach in terms of practical and statistical characteristics. Previous research, however, has not provided a critical test of the two techniques: They have not been compared under conditions in which the bootstrap could outperform the parametric approach. Such a comparison was carried out using nonindependent variates sampled from mixed-normal populations. Results indicated that the bootstrap has substantially better control of Type I error rates under some mixed-normal conditions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
The standard Pearson correlation coefficient is a biased estimator of the true population correlation, ρ, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, rc, has been recommended, and a standard formula based on asymptotic results for estimating its standard error is also available. In the present study, the bootstrap standard-error estimate is proposed as an alternative. Monte Carlo simulation studies involving both normal and nonnormal data were conducted to examine the empirical performance of the proposed procedure under different levels of ρ, selection ratio, sample size, and truncation types. Results indicated that, with normal data, the bootstrap standard-error estimate is more accurate than the traditional estimate, particularly with small sample size. With nonnormal data, performance of both estimates depends critically on the distribution type. Furthermore, the bootstrap bias-corrected and accelerated interval consistently provided the most accurate coverage probability for ρ. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
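A sketch of the corrected correlation and its bootstrap standard error, assuming direct restriction on the predictor and a known unrestricted SD (the Thorndike Case 2 correction; function names and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def corrected_r(x, y, sd_x_unrestricted):
    """Correlation corrected for direct range restriction on x
    (Thorndike Case 2): r_c = r*u / sqrt(1 + r^2*(u^2 - 1))."""
    r = np.corrcoef(x, y)[0, 1]
    u = sd_x_unrestricted / np.std(x, ddof=1)     # unrestricted / restricted SD
    return r * u / np.sqrt(1.0 + r**2 * (u**2 - 1.0))

def bootstrap_se_rc(x, y, sd_x_unrestricted, n_boot=5_000):
    """Bootstrap standard error of the corrected correlation r_c."""
    n = len(x)
    rcs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        rcs[b] = corrected_r(x[idx], y[idx], sd_x_unrestricted)
    return rcs.std(ddof=1)

# Simulate selection: keep only cases with x above its 60th percentile.
x = rng.normal(size=2_000)
y = 0.5 * x + rng.normal(size=2_000)
keep = x > np.quantile(x, 0.6)
print("r_c =", corrected_r(x[keep], y[keep], 1.0),
      "SE =", bootstrap_se_rc(x[keep], y[keep], 1.0))
```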

5.
The performance of selected parametric and nonparametric tests of location and scale differences is compared on the basis of sampling results from simulated populations of Likert-scale response ordinal data. Additionally, an omnibus test of distributional equality was examined. Type I and Type II error rates for all procedures examined do not indicate any clear-cut superiority for either type of test (parametric vs. nonparametric). Moreover, when sampling from disparate populations, the nonparametric tests of median equality are as sensitive to heterogeneous variances as the parametric test of mean equality. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
The authors conducted Monte Carlo simulations to investigate whether indirect range restriction (IRR) on 2 variables X and Y increases the sampling error variability in the correlation coefficient between them. The manipulated parameters were (a) IRR on X and Y (i.e., direct restriction on a third variable Z), (b) population correlations ρxy, ρxz, and ρyz, and (c) sample size. IRR increased the sampling error variance in rxy to values as high as 8.50% larger than the analytically derived expected values. Thus, in the presence of IRR, validity generalization users need to make theory-based decisions to ascertain whether the effects of IRR are artifactual or caused by situational-specific moderating effects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
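A minimal sketch of the IRR simulation design described above, assuming trivariate normal data and top-down selection on Z; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def irr_rxy_samples(rho_xy, rho_xz, rho_yz, n, sel_ratio, n_rep=2_000):
    """Sampling distribution of r_xy under indirect range restriction:
    selection is applied directly to Z, restricting X and Y indirectly."""
    cov = np.array([[1.0, rho_xy, rho_xz],
                    [rho_xy, 1.0, rho_yz],
                    [rho_xz, rho_yz, 1.0]])
    out = np.empty(n_rep)
    for rep in range(n_rep):
        # Oversample, then keep the top `sel_ratio` fraction on Z.
        data = rng.multivariate_normal(np.zeros(3), cov,
                                       size=int(n / sel_ratio))
        sel = data[np.argsort(data[:, 2])[::-1][:n]]  # direct selection on Z
        out[rep] = np.corrcoef(sel[:, 0], sel[:, 1])[0, 1]
    return out

rs = irr_rxy_samples(0.3, 0.5, 0.5, n=100, sel_ratio=0.3)
print("var(r_xy) under IRR:", rs.var(ddof=1))
```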

7.
Advances in testing the statistical significance of mediation effects.
P. A. Frazier, A. P. Tix, and K. E. Barron (2004) highlighted a normal theory method popularized by R. M. Baron and D. A. Kenny (1986) for testing the statistical significance of indirect effects (i.e., mediator variables) in multiple regression contexts. However, simulation studies suggest that this method lacks statistical power relative to some other approaches. The authors describe an alternative developed by P. E. Shrout and N. Bolger (2002) based on bootstrap resampling methods. An example and step-by-step guide for performing bootstrap mediation analyses are provided. The test of joint significance is also briefly described as an alternative to both the normal theory and bootstrap methods. The relative advantages and disadvantages of each approach in terms of precision in estimating confidence intervals of indirect effects, Type I error, and Type II error are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
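A sketch of a Shrout-and-Bolger-style bootstrap mediation test, assuming simple OLS paths and a percentile interval for the a*b indirect effect; the function names and simulated data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def indirect_effect(x, m, y):
    """a*b indirect effect: a from M ~ X, b from Y ~ X + M (OLS)."""
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    Xb = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]
    return a * b

def bootstrap_mediation_ci(x, m, y, n_boot=5_000, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect, resampling cases."""
    n = len(x)
    ab = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        ab[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.quantile(ab, [alpha / 2, 1 - alpha / 2])

x = rng.normal(size=100)
m = 0.5 * x + rng.normal(size=100)
y = 0.5 * m + rng.normal(size=100)
lo, hi = bootstrap_mediation_ci(x, m, y)
print(f"95% CI for a*b: [{lo:.3f}, {hi:.3f}]  (significant if it excludes 0)")
```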

8.
Most studies that have investigated the use of coarsely grained scales have indicated that the accuracy of statistics calculated on such scales is not compromised as long as the scales have about 5 or more points. Gregoire and Driver (1987), however, found serious perturbations of the Type I and Type II error rates using a 5-point scale. They carried out three computer simulation experiments in which continuous data were transformed to Likert-scale values. Two of the three experiments are shown to be flawed because the authors incorrectly specified the population mean in their simulation. This article corrects the flaw and demonstrates that the Type I and Type II error rates are not seriously compromised by the use of ordinal-scale data. Furthermore, Gregoire and Driver's results are reinterpreted to show that in most cases, the parametric test of location equality shows a power superiority to the nonparametric tests. Only in their most nonnormal simulation does a nonparametric test show a power superiority. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
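A sketch of the simulation idea: transform continuous scores to a 5-point Likert scale and check empirical Type I error of a parametric and a nonparametric test. The threshold scheme and sample sizes are illustrative assumptions, not the original study's design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def to_likert(z, n_points=5):
    """Discretize continuous scores into a coarse Likert scale by cutting
    the latent normal distribution at equally spaced quantiles."""
    cuts = stats.norm.ppf(np.linspace(0, 1, n_points + 1)[1:-1])
    return np.digitize(z, cuts) + 1               # values 1..n_points

def type1_rates(n=30, n_rep=5_000, alpha=0.05):
    """Empirical Type I error of the t test vs. Mann-Whitney on 5-point
    data when both groups come from the same population."""
    t_rej = mw_rej = 0
    for _ in range(n_rep):
        g1 = to_likert(rng.normal(size=n))
        g2 = to_likert(rng.normal(size=n))
        t_rej += stats.ttest_ind(g1, g2).pvalue < alpha
        mw_rej += stats.mannwhitneyu(g1, g2).pvalue < alpha
    return t_rej / n_rep, mw_rej / n_rep

print("Type I rates (t, Mann-Whitney):", type1_rates())
```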

9.
Power differences between multivariate and ε-adjusted univariate tests are presented for various configurations and sizes of population means, degrees of nonsphericity, and sample sizes for small and large repeated measures designs with 1 within-subjects factor. The results are applicable to various designs (e.g., longitudinal). Power differences were calculated by adapting procedures presented in K. E. Muller and C. N. Barton (1989, 1991). The results demonstrate that, for parametric conditions likely to be encountered by psychological researchers, the differences between the 2 approaches can be considerable. The authors recommend that sample size be chosen according to the procedures enumerated by Muller and Barton but provide simple guidelines for use when the information required by the Muller-Barton approach is not available. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
RATIONALE AND OBJECTIVES: The authors performed this study to address two practical questions. First, how large does the sample size need to be for confidence intervals (CIs) based on the usual asymptotic methods to be appropriate? Second, when the sample size is smaller than this threshold, what alternative method of CI construction should be used? MATERIALS AND METHODS: The authors performed a Monte Carlo simulation study where 95% CIs were constructed for the receiver operating characteristic (ROC) area and for the difference between two ROC areas, for rating and continuous test results and for ROC areas of moderate and high accuracy, by using both parametric and nonparametric estimation methods. Alternative methods evaluated included several bootstrap CIs and CIs with the Student t distribution. RESULTS: For the difference between two ROC areas, CIs based on the asymptotic theory provided adequate coverage even when the sample size was very small (20 patients). In contrast, for a single ROC area, the asymptotic methods do not provide adequate CI coverage for small samples; for ROC areas of high accuracy, the sample size must be large (more than 200 patients) for the asymptotic methods to be applicable. The recommended alternative (bootstrap percentile, bootstrap t, or bootstrap bias-corrected accelerated method) depends on the estimation approach, format of the test results, and ROC area. CONCLUSION: Currently, there is not a single best alternative for constructing CIs for a single ROC area for small samples.
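A minimal sketch of a nonparametric ROC area with a bootstrap percentile CI, one of the alternatives evaluated above; resampling within each patient group and the simulated scores are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def auc(pos, neg):
    """Nonparametric ROC area via the Mann-Whitney statistic:
    P(score_pos > score_neg) + 0.5 * P(tie)."""
    diff = pos[:, None] - neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def bootstrap_auc_ci(pos, neg, n_boot=5_000, alpha=0.05):
    """Percentile bootstrap CI for a single ROC area; patients are
    resampled within the diseased and nondiseased groups."""
    aucs = np.empty(n_boot)
    for b in range(n_boot):
        pb = rng.choice(pos, size=len(pos), replace=True)
        nb = rng.choice(neg, size=len(neg), replace=True)
        aucs[b] = auc(pb, nb)
    return np.quantile(aucs, [alpha / 2, 1 - alpha / 2])

neg = rng.normal(0.0, 1.0, size=20)               # nondiseased scores
pos = rng.normal(1.5, 1.0, size=20)               # diseased scores
print("AUC =", auc(pos, neg), "CI =", bootstrap_auc_ci(pos, neg))
```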

11.
Meta-analytic procedures allow for determining best estimates of the individual-level, the within-organization, and the organizational-level population correlations. In most validity generalization work, meta-analytic procedures have been used to provide best estimates of the within-organization correlation. However, in many other organizational domains, researchers often do not clearly specify which population parameter is of interest. Further, researchers often combine studies in which data were collected at different levels of analysis or with mixed (single- and multiple-organization) sampling schemes, making it difficult to interpret unambiguously the meta-analytic estimate ρ̂. The authors focus on how to make appropriate inferences from meta-analytic studies by integrating a levels-of-analysis framework with meta-analytic techniques, highlighting how meta-analytic procedures can aid researchers in better understanding multilevel relationships among organizational constructs. The authors provide recommendations for clearer specifications of populations and levels issues in future meta-analytic studies. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Bootstrapping is introduced as a method for approximating the standard errors of validity generalization (VG) estimates. A Monte Carlo study was conducted to evaluate the accuracy of bootstrap validity-distribution parameter estimates, bootstrap standard error estimates, and nonparametric bootstrap confidence intervals. In the simulation study the authors manipulated the sample sizes per correlation coefficient, the number of coefficients per VG analysis, and the variance of the distribution of true correlation coefficients. The results indicate that the standard error estimates produced by the bootstrapping procedure were very accurate. It is recommended that the bootstrap standard-error estimates and confidence intervals be used in the interpretation of the results of VG analyses. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
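A sketch of the approach under a simplified bare-bones VG model (sample-size-weighted mean validity, residual variance after subtracting an average sampling-error term); the exact artifact corrections of a full VG analysis are omitted, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def bare_bones_vg(rs, ns):
    """Weighted mean validity and residual (true) validity variance,
    using an n-bar approximation for expected sampling-error variance."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    r_bar = np.average(rs, weights=ns)
    var_obs = np.average((rs - r_bar) ** 2, weights=ns)
    var_err = (1 - r_bar**2) ** 2 / (ns.mean() - 1)
    return r_bar, max(var_obs - var_err, 0.0)

def bootstrap_vg_se(rs, ns, n_boot=5_000):
    """Bootstrap SEs of the VG estimates: resample whole studies
    (r, n pairs) with replacement."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    k = len(rs)
    ests = np.empty((n_boot, 2))
    for b in range(n_boot):
        idx = rng.integers(0, k, size=k)
        ests[b] = bare_bones_vg(rs[idx], ns[idx])
    return ests.std(axis=0, ddof=1)               # SEs of mean rho and variance

rs = [0.21, 0.35, 0.28, 0.15, 0.40, 0.30]
ns = [80, 120, 60, 150, 90, 110]
print("estimates:", bare_bones_vg(rs, ns), "bootstrap SEs:", bootstrap_vg_se(rs, ns))
```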

13.
Structure coefficients from discriminant analyses assist the substantive interpretation with rules of thumb indicating whether variables load on discriminant functions. A variable's structure coefficient should at least be statistically different from zero before it is interpreted. Unfortunately, standard errors of structure coefficients are not available. However, the bootstrap and the jackknife procedures provide statistical tests in such circumstances. Bootstrap and jackknife analyses of an example data set obtained different interpretations from those using the usual rules of thumb. Results from Monte Carlo studies, in terms of Type I error rates and confidence interval coverage, did not support the usual rules of thumb and clearly showed the superiority of the standard bootstrap test and the bootstrap percentile confidence interval. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
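A sketch of bootstrap percentile CIs for structure coefficients, assuming a two-group discriminant analysis via scikit-learn's LinearDiscriminantAnalysis; the sign-alignment step and simulated data are illustrative choices, not from the paper:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(8)

def structure_coefs(X, y):
    """Structure coefficients: correlations between each predictor and
    scores on the first discriminant function."""
    scores = LinearDiscriminantAnalysis().fit(X, y).transform(X)[:, 0]
    return np.array([np.corrcoef(X[:, j], scores)[0, 1]
                     for j in range(X.shape[1])])

def bootstrap_structure_cis(X, y, n_boot=2_000, alpha=0.05):
    """Percentile CIs; a coefficient whose interval excludes zero is
    treated as statistically different from zero."""
    base = structure_coefs(X, y)
    n = len(y)
    coefs = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        c = structure_coefs(X[idx], y[idx])
        if c @ base < 0:                          # fix arbitrary sign reflection
            c = -c
        coefs[b] = c
    return np.quantile(coefs, [alpha / 2, 1 - alpha / 2], axis=0)

X = rng.normal(size=(120, 3))
y = (X[:, 0] + 0.2 * rng.normal(size=120) > 0).astype(int)
print("coefficients:", structure_coefs(X, y))
print("95% CIs:\n", bootstrap_structure_cis(X, y))
```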

14.
The authors argue that a robust version of Cohen's effect size constructed by replacing population means with 20% trimmed means and the population standard deviation with the square root of a 20% Winsorized variance is a better measure of population separation than is Cohen's effect size. The authors investigated coverage probability for confidence intervals for the new effect size measure. The confidence intervals were constructed by using the noncentral t distribution and the percentile bootstrap. Over the range of distributions and effect sizes investigated in the study, coverage probability was better for the percentile bootstrap confidence interval. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
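A sketch of the robust effect size as described: 20% trimmed means over a pooled 20% Winsorized SD. Published versions additionally rescale by a constant so the index matches Cohen's d under normality; that rescaling is omitted here, and the heavy-tailed demo data are illustrative:

```python
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

def robust_effect_size(x, y, trim=0.2):
    """Robust analogue of Cohen's d: 20% trimmed mean difference divided
    by a pooled 20% Winsorized standard deviation."""
    tdiff = stats.trim_mean(x, trim) - stats.trim_mean(y, trim)
    vx = np.var(np.asarray(winsorize(x, limits=(trim, trim))), ddof=1)
    vy = np.var(np.asarray(winsorize(y, limits=(trim, trim))), ddof=1)
    s_w = np.sqrt(((len(x) - 1) * vx + (len(y) - 1) * vy)
                  / (len(x) + len(y) - 2))        # pooled Winsorized SD
    return tdiff / s_w

rng = np.random.default_rng(9)
x = rng.standard_t(df=3, size=50) + 0.8           # heavy-tailed, shifted
y = rng.standard_t(df=3, size=50)
print("robust d =", robust_effect_size(x, y))
```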

15.
Population models were developed to analyze processes, described by parametric models, from measurements obtained in a sample of individuals. In order to analyze the sources of interindividual variability, covariates may be incorporated in the population analysis. The exploratory analyses and the two-stage approaches which use standard non-linear regression techniques are simple tools to select meaningful covariates. The global population approaches may be divided into two classes within which the covariates are handled differently: the parametric and the non-parametric methods. The power as well as the limitations of each approach regarding handling of covariates are illustrated and compared using the same data set which concerns the pharmacokinetics of gentamicin in neonates. With parametric approaches a second-stage model between structural parameters and covariates has to be defined. In the non-parametric method the joint distribution of parameters and covariates is estimated without parametric assumptions; however, it is assumed that covariates are observed with some error and parameters involved in functional relationships are not estimated. The important results concerning gentamicin in neonates were found by the two methods.

16.
One approach to the analysis of repeated measures data allows researchers to model the covariance structure of the data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach was evaluated for testing all possible pairwise differences among repeated measures marginal means in a Between-Subjects × Within-Subjects design. Specifically, the authors investigated Type I error and power rates for a number of simultaneous and stepwise multiple comparison procedures using SAS (1999) PROC MIXED in unbalanced designs when normality and covariance homogeneity assumptions did not hold. J. P. Shaffer's (1986) sequentially rejective step-down and Y. Hochberg's (1988) sequentially acceptive step-up Bonferroni procedures, based on an unstructured covariance structure, had superior Type I error control and power to detect true pairwise differences across the investigated conditions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
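A sketch of one of the two winning procedures, Hochberg's (1988) step-up Bonferroni, applied to a set of pairwise-comparison p-values; the example p-values are illustrative:

```python
import numpy as np

def hochberg(pvals, alpha=0.05):
    """Hochberg's (1988) sequentially acceptive step-up Bonferroni:
    with p-values sorted ascending, reject H(1)..H(k) for the largest k
    such that p(k) <= alpha / (m - k + 1)."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    k = -1
    for i in range(m - 1, -1, -1):                # step up from the largest p
        if p[order[i]] <= alpha / (m - i):
            k = i
            break
    if k >= 0:
        reject[order[: k + 1]] = True
    return reject

# Six pairwise comparisons among four repeated measures marginal means.
pvals = [0.001, 0.012, 0.021, 0.034, 0.20, 0.41]
print(hochberg(pvals))
```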

17.
One of the most frequently cited reasons for conducting a meta-analysis is the increase in statistical power that it affords a reviewer. This article demonstrates that fixed-effects meta-analysis increases statistical power by reducing the standard error of the weighted average effect size (T̄.), thereby shrinking the confidence interval around T̄.. Small confidence intervals make it more likely for reviewers to detect nonzero population effects, thereby increasing statistical power. Smaller confidence intervals also represent increased precision of the estimated population effect size. Computational examples are provided for 3 effect-size indices: d (standardized mean difference), Pearson's r, and odds ratios. Random-effects meta-analyses also may show increased statistical power and a smaller standard error of the weighted average effect size. However, the authors demonstrate that increasing the number of studies in a random-effects meta-analysis does not always increase statistical power. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
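A worked sketch of the fixed-effects mechanism described above: with inverse-variance weights, the standard error of the weighted average is 1/sqrt(sum of weights), so adding studies shrinks the CI. The effect sizes and variances below are illustrative:

```python
import numpy as np
from scipy import stats

def fixed_effects_meta(effects, variances, alpha=0.05):
    """Fixed-effects weighted average effect size, its standard error,
    and the confidence interval that shrinks as studies accumulate."""
    w = 1.0 / np.asarray(variances, float)        # inverse-variance weights
    t_bar = np.sum(w * np.asarray(effects)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))                 # SE of the weighted average
    zc = stats.norm.ppf(1 - alpha / 2)
    return t_bar, se, (t_bar - zc * se, t_bar + zc * se)

# Standardized mean differences (d) and their sampling variances.
d = [0.30, 0.15, 0.45, 0.25, 0.10]
v = [0.04, 0.06, 0.05, 0.03, 0.07]
print(fixed_effects_meta(d, v))   # adding studies increases sum(w), shrinking the SE
```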

18.
Examined 4 procedures for analyzing 0–1 data in repeated measurements designs for various combinations of sample size, number of treatments, and degree of heterogeneity of covariance. The test statistics were Cochran's Q test, the univariate F test, and Q and F statistics adjusted for heterogeneous covariances. Type I and Type II error rates based on computer simulations indicated problems with sample sizes of less than 16; for larger samples, Q+ and F+ gave honest Type I error rates, even under conditions of extreme heterogeneity of covariance. (21 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
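A sketch of the unadjusted Cochran's Q statistic for dichotomous repeated measurements (the adjusted Q+ and F+ variants are not reproduced here); the simulated data are illustrative:

```python
import numpy as np
from scipy import stats

def cochran_q(X):
    """Cochran's Q for an n-subjects x k-treatments matrix of 0-1 data:
    Q = (k-1) * (k * sum(C_j^2) - N^2) / (k*N - sum(R_i^2)),
    referred to a chi-square distribution with k-1 df."""
    X = np.asarray(X)
    n, k = X.shape
    C = X.sum(axis=0)                             # treatment (column) totals
    R = X.sum(axis=1)                             # subject (row) totals
    N = X.sum()
    Q = (k - 1) * (k * np.sum(C**2) - N**2) / (k * N - np.sum(R**2))
    return Q, stats.chi2.sf(Q, k - 1)

rng = np.random.default_rng(10)
X = (rng.random((20, 3)) < [0.3, 0.5, 0.6]).astype(int)  # 20 subjects, 3 treatments
print("Q, p =", cochran_q(X))
```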

19.
Willingness to pay (WTP) for a health care program can be estimated in contingent valuation (CV) studies by a nonparametric approach. The nonparametric approach is free from distributional assumptions, which is a strength compared with parametric regression-based approaches. However, using a nonparametric approach it is not clear how to obtain confidence statements for WTP estimates, for example, when testing hypotheses regarding differences in mean WTP for different subsamples. The authors propose a procedure that allows statistical testing and confidence interval estimation by employing bootstrap techniques. The method is easy to implement and has low computational costs with modern personal computers. The method is applied to data from a CV study where the WTP for hormone replacement therapy was investigated. The mean WTP was estimated for the full sample and separately for women with mild and severe menopausal symptoms. Using the proposed method, the mean WTP was significantly higher in the group with severe symptoms.  
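A sketch of the general approach, assuming a Turnbull-style lower-bound nonparametric mean WTP from dichotomous-choice data and a percentile bootstrap for the group difference; this is not necessarily the authors' exact estimator, and the bid design and acceptance model below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

def mean_wtp_lower_bound(bids, accepts):
    """Lower-bound mean WTP: acceptance proportions at each bid level,
    monotonized into a survival curve; each probability mass is valued
    at the lower end of its bid interval."""
    levels = np.unique(bids)
    s = np.array([accepts[bids == t].mean() for t in levels])
    s = np.minimum.accumulate(s)                  # enforce nonincreasing survival
    t = np.concatenate([[0.0], levels])
    s = np.concatenate([[1.0], s, [0.0]])
    return float(np.sum(t * (s[:-1] - s[1:])))

def bootstrap_wtp_diff_ci(b1, a1, b2, a2, n_boot=5_000, alpha=0.05):
    """Percentile bootstrap CI for the difference in mean WTP between two
    subsamples; the difference is significant if the CI excludes zero."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        i1 = rng.integers(0, len(b1), size=len(b1))
        i2 = rng.integers(0, len(b2), size=len(b2))
        diffs[i] = (mean_wtp_lower_bound(b1[i1], a1[i1])
                    - mean_wtp_lower_bound(b2[i2], a2[i2]))
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

b1 = np.tile([10.0, 25.0, 50.0, 100.0], 30)       # bids, severe-symptom group
a1 = (rng.random(b1.size) < np.exp(-b1 / 80)).astype(int)
b2 = np.tile([10.0, 25.0, 50.0, 100.0], 30)       # bids, mild-symptom group
a2 = (rng.random(b2.size) < np.exp(-b2 / 40)).astype(int)
print("95% CI for WTP difference:", bootstrap_wtp_diff_ci(b1, a1, b2, a2))
```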

20.
Conventional statistical approaches rely heavily on the properties of the central limit theorem to bridge the gap between the characteristics of a sample and some theoretical sampling distribution. Problems associated with nonrandom sampling, unknown population distributions, heterogeneous variances, small sample sizes, and missing data jeopardize the assumptions of such approaches and cast skepticism on conclusions. Conventional nonparametric alternatives offer freedom from distribution assumptions, but design limitations and loss of power can be serious drawbacks. With the data-processing capacity of today's computers, a new dimension of distribution-free statistical methods has evolved that addresses many of the limitations of conventional parametric and nonparametric methods. Computer-intensive statistical methods involve reshuffling, resampling, or simulating a data set thousands of times to empirically define a sampling distribution for a chosen test statistic. The only assumption necessary for valid results is the random assignment of experimental units to the test groups or treatments. Application to a real data set illustrates the advantages of these methods, including freedom from distribution assumptions without loss of power, complete choice over test statistics, easy adaptation to design complexities and missing data, and considerable intuitive appeal. The illustrations also reveal that computer-intensive methods can be more time consuming than conventional methods and the amount of computer code required to orchestrate reshuffling, resampling, or simulation procedures can be appreciable.  
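A minimal sketch of the "reshuffling" idea described above, as a permutation (randomization) test of a mean difference; the skewed demo data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(12)

def permutation_test(g1, g2, n_perm=10_000):
    """Randomization test: reshuffle group labels to build an empirical
    null distribution for the difference in means. The only assumption
    is random assignment of units to groups."""
    pooled = np.concatenate([g1, g2])
    n1 = len(g1)
    d_obs = g1.mean() - g2.mean()
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # reshuffle the labels
        count += abs(pooled[:n1].mean() - pooled[n1:].mean()) >= abs(d_obs)
    return count / n_perm

g1 = rng.exponential(1.0, size=12)                # skewed, small samples
g2 = rng.exponential(1.5, size=15)
print("permutation p =", permutation_test(g1, g2))
```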
