Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Compares the accuracy of several formulas for the standard error of the mean uncorrected correlation in meta-analytic and validity generalization studies. The effect of computing the mean correlation by weighting the correlation in each study by its sample size is also studied. On the basis of formal analysis and simulation studies, it is concluded that the common formula for the sampling variance of the mean correlation, Vr̄ = Vr/K, where K is the number of studies in the meta-analysis, gives reasonably accurate results. This formula gives accurate results even when sample sizes and ρs are unequal and regardless of whether or not the statistical artifacts vary from study to study. It is also shown that using sample-size weighting may result in underestimation of the standard error of the mean uncorrected correlation when there are outlier sample sizes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
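For readers who want to see the formula in action, below is a minimal Python sketch (not from the abstract; the study correlations and sample sizes are made-up illustrative values) that computes the unweighted mean correlation, its standard error via Vr̄ = Vr/K, and a sample-size-weighted mean for comparison.

```python
import numpy as np

# Hypothetical study correlations and sample sizes (illustrative values only).
r = np.array([0.21, 0.35, 0.28, 0.14, 0.40])
n = np.array([60, 120, 80, 45, 300])
K = len(r)

# Unweighted mean correlation and the simple standard error from the formula
# discussed above: Var(mean r) = Var(r) / K.
r_bar = r.mean()
se_r_bar = np.sqrt(r.var(ddof=1) / K)

# Sample-size-weighted mean; a single outlier sample size (n = 300 here)
# dominates the weighted estimate, which is the situation the abstract warns about.
r_bar_weighted = np.average(r, weights=n)

print(round(r_bar, 3), round(se_r_bar, 3), round(r_bar_weighted, 3))
```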

2.
A series of Monte Carlo computer simulations was conducted to investigate (a) the likelihood that meta-analysis will detect true differences in effect sizes rather than attributing differences to methodological artifact and (b) the likelihood that meta-analysis will suggest the presence of moderator variables when in fact differences in effect sizes are due to methodological artifact. The simulations varied the magnitude of the true population differences between correlations, the number of studies included in the meta-analysis, and the average sample size. Simulations were run both correcting and not correcting for measurement error. The power of 3 indices—the Schmidt-Hunter ratio of expected to observed variance, the Callender-Osburn procedure, and a chi-square test—to detect true differences was investigated. Results show that small true differences were not detected regardless of sample size and the number of studies and that moderate true differences were not detected with small numbers of studies or small sample sizes. Hence, there is a need for caution in attributing observed variation across studies to artifact. (9 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Averaging correlations leads to underestimation because the sampling distribution of the correlation coefficient is skewed. It is also known that if correlations are transformed by Fisher's z prior to averaging, the resulting average overestimates the population value of z. The behavior of these procedures for averaging correlations was investigated via Monte Carlo simulation, both in terms of bias (under- and overestimation) and precision (standard errors). It was found that average z back-transformed to r is less biased positively than average r is biased negatively. The standard error of average r was smaller than that of average z when the population correlation was small; however, the reverse was true when the population correlation exceeded .5. Regardless of sample size, back-transformed average z was always less biased; therefore, the use of the z transformation is recommended when averaging correlation coefficients, particularly when sample size is small. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
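A small Python sketch of the two averaging procedures compared above, with arbitrary settings (ρ = .5, n = 20 per study, 10 studies per meta-analysis); it simply contrasts averaging raw correlations with averaging Fisher z values and back-transforming.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n, k_studies, reps = 0.5, 20, 10, 1000   # arbitrary illustrative settings

def sample_r(rho, n, rng):
    # One sample correlation from a bivariate normal population with correlation rho.
    x = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    return np.corrcoef(x[:, 0], x[:, 1])[0, 1]

mean_r, mean_z_back = [], []
for _ in range(reps):
    rs = np.array([sample_r(rho, n, rng) for _ in range(k_studies)])
    mean_r.append(rs.mean())                             # average the raw correlations
    mean_z_back.append(np.tanh(np.arctanh(rs).mean()))   # average Fisher z, then back-transform

print("bias of mean r:                 ", round(np.mean(mean_r) - rho, 4))
print("bias of back-transformed mean z:", round(np.mean(mean_z_back) - rho, 4))
```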

4.
The efficacy of the Hedges and colleagues, Rosenthal-Rubin, and Hunter-Schmidt methods for combining correlation coefficients was tested for cases in which population effect sizes were both fixed and variable. After a brief tutorial on these meta-analytic methods, the author presents 2 Monte Carlo simulations that compare these methods for cases in which the number of studies in the meta-analysis and the average sample size of studies were varied. In the fixed case the methods produced comparable estimates of the average effect size; however, the Hunter-Schmidt method failed to control the Type I error rate for the associated significance tests. In the variable case, for both the Hedges and colleagues and Hunter-Schmidt methods, Type I error rates were not controlled for meta-analyses including 15 or fewer studies and the probability of detecting small effects was less than .3. Some practical recommendations are made about the use of meta-analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
It has long been claimed that people perceive the world as less variable and more regular than it actually is. Such misperception, if shown to exist, could explain some perplexing behaviors. However, evidence supporting the claim is indirect, and there is no explanation of its cause. As a possible cause, the authors suggest that people use sample variability as an estimate of population variability. This is so because the sampling distribution of sample variance is downward attenuated, the attenuation being substantial for sample sizes that people consider. Results of 5 experiments show that people use sample variability, uncorrected for sample size, in tasks in which a correction is normatively called for, and indeed perceive variability as smaller than it is. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
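The "downward attenuation" mentioned above can be illustrated with a short simulation. This Python sketch (parameters are arbitrary, not taken from the experiments) compares the uncorrected sample variance (divide by n) with the corrected one (divide by n - 1) for the small sample sizes people typically consider.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 1.0                     # true population variance
for n in (2, 4, 8, 16):
    x = rng.normal(0.0, np.sqrt(sigma2), size=(100_000, n))
    var_uncorrected = x.var(axis=1, ddof=0)   # divide by n (no correction)
    var_corrected = x.var(axis=1, ddof=1)     # divide by n - 1
    # The uncorrected sample variance underestimates sigma^2 by the factor (n - 1) / n,
    # which is substantial for very small samples.
    print(n, round(var_uncorrected.mean(), 3), round(var_corrected.mean(), 3))
```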

6.
One conceptualization of meta-analysis is that studies within the meta-analysis are sampled from populations with mean effect sizes that vary (random-effects models). The consequences of not applying such models and the comparison of different methods have been hotly debated. A Monte Carlo study compared the efficacy of Hedges and Vevea's random-effects methods of meta-analysis with Hunter and Schmidt's, over a wide range of conditions, as the variability in population correlations increases. (a) The Hunter-Schmidt method produced estimates of the average correlation with the least error, although estimates from both methods were very accurate; (b) confidence intervals from Hunter and Schmidt's method were always slightly too narrow but became more accurate than those from Hedges and Vevea's method as the number of studies included in the meta-analysis, the size of the true correlation, and the variability of correlations increased; and (c) the study weights did not explain the differences between the methods. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
This study used Monte Carlo simulation to examine the increase in accuracy resulting from 2 statistical refinements of the interactive Schmidt-Hunter procedures for meta-analysis: the use of the mean correlation instead of individual correlations in the estimation of sampling error variance, and a procedure that takes into account the nonlinear nature of the range-restriction correction. In all of the cases examined, these refinements increased the accuracy of the interactive procedure in estimating the variance of population correlations and resulted in more accuracy than other procedures examined. The use of the mean correlation in the sampling error variance formula also increased the accuracy of variance estimates for the multiplicative and Taylor Series procedures. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
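As one way to picture the first refinement, here is a hedged Python sketch (hypothetical correlations and sample sizes, not from the study) that contrasts sampling error variance computed from each study's individual correlation with the version that substitutes the mean correlation into the same formula.

```python
import numpy as np

# Hypothetical study correlations and sample sizes.
r = np.array([0.22, 0.31, 0.18, 0.40, 0.27])
n = np.array([50, 75, 60, 120, 90])

r_bar = np.average(r, weights=n)

# Per-study sampling error variance using the individual correlations ...
v_individual = (1 - r**2) ** 2 / (n - 1)
# ... versus the refinement discussed above, which plugs the mean correlation
# into the same formula for every study.
v_mean_based = (1 - r_bar**2) ** 2 / (n - 1)

print(round(np.average(v_individual, weights=n), 5))
print(round(np.average(v_mean_based, weights=n), 5))
```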

8.
The authors discuss potential confusion in conducting primary studies and meta-analyses on the basis of differences between groups. First, the authors show that a formula for the sampling error of the standardized mean difference (d) that is based on equal group sample sizes can produce substantially biased results if applied with markedly unequal group sizes. Second, the authors show that the same concerns are present when primary analyses or meta-analyses are conducted with point-biserial correlations, as the point-biserial correlation (r) is a transformation of d. Third, the authors examine the practice of correcting a point-biserial r for unequal sample sizes and note that such correction would also increase the sampling error of the corrected r. Correcting rs for unequal sample sizes, but using the standard formula for sampling error in uncorrected r, can result in bias. The authors offer a set of recommendations for conducting meta-analyses of group differences. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
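The unequal-group-size issue can be made concrete with a short sketch. The two Python functions below use the standard large-sample variance formula for d and a common equal-group shortcut; the numbers are hypothetical, and the exact expressions examined in the article may differ.

```python
def var_d_unequal(d, n1, n2):
    # Large-sample sampling variance of the standardized mean difference d
    # that allows unequal group sizes.
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def var_d_equal_approx(d, n_total):
    # Common shortcut that assumes two equal groups of n_total / 2.
    return 4 / n_total + d**2 / (2 * n_total)

d, n1, n2 = 0.5, 20, 180   # markedly unequal groups, hypothetical values
print(round(var_d_unequal(d, n1, n2), 4))        # about 0.0562
print(round(var_d_equal_approx(d, n1 + n2), 4))  # about 0.0206, substantially too small here
```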

9.
Missing effect-size estimates pose a particularly difficult problem in meta-analysis. Rather than discarding studies with missing effect-size estimates or setting missing effect-size estimates equal to 0, the meta-analyst can supplement effect-size procedures with vote-counting procedures if the studies report the direction of results or the statistical significance of results. By combining effect-size and vote-counting procedures, the meta-analyst can obtain a less biased estimate of the population effect size and a narrower confidence interval for the population effect size. This article describes 3 vote-counting procedures for estimating the population correlation coefficient in studies with missing sample correlations. Easy-to-use tables, based on equal sample sizes, are presented for the 3 procedures. More complicated vote-counting procedures also are given for unequal sample sizes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Each of several Monte Carlo simulations generated 100 sets of observed study correlations based on normal, heteroscedastic, or slightly nonlinear bivariate distributions, with one population correlation coefficient and true variance of 0. A version of J. E. Hunter and F. L. Schmidt's (1990b) meta-analysis was applied to each study set. Within simulations, ρ̂ was accurate on average. σ̂ρ² was biased; one would correctly conclude more than half the time that no moderator effects existed. However, cases of variation in ρ̂ and especially in σ̂ρ² indicated that results from individual meta-analyses could deviate substantially from what was found on average. Findings for these no-moderator cases offer applied psychologists some guidelines and cautions when drawing conclusions about true population correlations and true moderator effects (e.g., situational specificity, validity generalization) from meta-analytic results. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
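For orientation, a minimal "bare bones" sketch of the kind of Hunter-Schmidt estimates referred to above (ρ̂ and σ̂ρ²) is given below in Python; it omits artifact corrections, and the input correlations and sample sizes are invented for illustration.

```python
import numpy as np

def bare_bones_meta(r, n):
    """Hunter-Schmidt-style 'bare bones' estimates (a sketch, no artifact corrections)."""
    r = np.asarray(r, dtype=float)
    n = np.asarray(n, dtype=float)
    rho_hat = np.average(r, weights=n)                    # sample-size-weighted mean r
    var_obs = np.average((r - rho_hat) ** 2, weights=n)   # observed variance of r
    var_err = np.average((1 - rho_hat**2) ** 2 / (n - 1), weights=n)  # expected sampling error
    var_rho = max(var_obs - var_err, 0.0)                 # estimated true variance of rho
    return rho_hat, var_rho

# Hypothetical input: five study correlations with their sample sizes.
print(bare_bones_meta([0.20, 0.35, 0.10, 0.28, 0.22], [40, 90, 55, 120, 70]))
```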

11.
Research in validity generalization has generated renewed interest in the sampling error of the Pearson correlation coefficient. The standard estimator for the sampling variance of the correlation was derived under assumptions that do not consider the presence of measurement error or range restriction in the data. The accuracy of the estimator in attenuated or restricted data has not been studied. This article presented the results of computer simulations that examined the accuracy of the sampling variance estimator in data containing measurement error. Sample sizes of n = 25, n = 60, and n = 100 are used, with the reliability ranging from .10 to 1.00, and the population correlation ranging from .10 to .90. Results demonstrated that the estimator has a slight negative bias, but may be sufficiently accurate for practical applications if the sample size is at least 60. In samples of this size, the presence of measurement error does not add greatly to the inaccuracy of the estimator. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
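A hedged Python sketch of the kind of simulation described (the design details here are my own simplification, not the article's): generate attenuated observed scores, then compare the empirical sampling variance of r with the average of the standard estimator (1 - r²)²/(n - 1).

```python
import numpy as np

rng = np.random.default_rng(2)
rho_true, rxx, ryy, n, reps = 0.5, 0.7, 0.7, 60, 5000   # arbitrary settings

rs = []
for _ in range(reps):
    # True scores with correlation rho_true, plus independent measurement error,
    # so the observed-score correlation is attenuated.
    t = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=n)
    x = np.sqrt(rxx) * t[:, 0] + np.sqrt(1 - rxx) * rng.normal(size=n)
    y = np.sqrt(ryy) * t[:, 1] + np.sqrt(1 - ryy) * rng.normal(size=n)
    rs.append(np.corrcoef(x, y)[0, 1])
rs = np.array(rs)

empirical_var = rs.var(ddof=1)              # "true" sampling variance of r in attenuated data
estimator = (1 - rs**2) ** 2 / (n - 1)      # standard estimator applied to each sample
print(round(empirical_var, 5), round(estimator.mean(), 5))
```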

12.
Estimation of the log-normal mean
The most commonly used estimator for a log-normal mean is the sample mean. In this paper, we show that this estimator can have a large mean square error, even for large samples. Then, we study three main alternative estimators: (i) a uniformly minimum variance unbiased (UMVU) estimator; (ii) a maximum likelihood (ML) estimator; (iii) a conditionally minimal mean square error (MSE) estimator. We find that the conditionally minimal MSE estimator has the smallest mean square error among the four estimators considered here, regardless of the sample size and the skewness of the log-normal population. However, for large samples (n ≥ 200), the UMVU estimator, the ML estimator, and the conditionally minimal MSE estimator have very similar mean square errors. Since the ML estimator is the easiest to compute among these three estimators, for large samples we recommend the use of the ML estimator. For small to moderate samples, we recommend the use of the conditionally minimal MSE estimator.
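To make the comparison concrete, here is a brief Python sketch contrasting the sample mean with the ML plug-in estimator exp(μ̂ + σ̂²/2); the UMVU and conditionally minimal MSE estimators discussed in the paper are not reproduced, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 1.0, 1.5
true_mean = np.exp(mu + sigma**2 / 2)   # population mean of the log-normal

n = 200
x = rng.lognormal(mu, sigma, size=n)

sample_mean = x.mean()

# ML estimator: plug the ML estimates of mu and sigma^2 (divisor n) into
# exp(mu + sigma^2 / 2).
logx = np.log(x)
ml_mean = np.exp(logx.mean() + logx.var(ddof=0) / 2)

print(round(true_mean, 3), round(sample_mean, 3), round(ml_mean, 3))
```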

13.
Validity generalization methods require accurate estimates of the sampling variance in the correlation coefficient when the range of variation in the data is restricted. This article presents the results of computer simulations examining the accuracy of the sampling variance estimator under sample range restrictions. Range restriction is assumed to occur by direct selection on the predictor. Sample sizes of 25, 60, and 100 are used, with the selection ratio ranging from .10 to 1.0 and the population correlation ranging from .10 to .90. The estimator is found to have a slight negative bias in unrestricted data. In restricted data, the bias is substantial in sample sizes of 60 or less. In all sample sizes, the negative bias increases as the selection ratio becomes smaller. Implications of the results for studies of validity generalization are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
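A minimal Python sketch of direct selection on the predictor, assuming the same general setup (bivariate normal applicant population, a selection ratio, then the standard sampling-variance estimator applied in the restricted group); the specific numbers are illustrative and the code is not a reproduction of the article's design.

```python
import numpy as np

rng = np.random.default_rng(4)
rho, n_applicants, selection_ratio, n_sample = 0.5, 100_000, 0.10, 60

# Applicant population: predictor x and criterion y with correlation rho.
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_applicants)
cutoff = np.quantile(xy[:, 0], 1 - selection_ratio)
selected = xy[xy[:, 0] >= cutoff]          # direct selection on the predictor

# The correlation in the restricted (selected) group is attenuated.
r_restricted = np.corrcoef(selected[:, 0], selected[:, 1])[0, 1]
print("restricted correlation:", round(r_restricted, 3))

# Draw one restricted sample of the size used in the simulations and apply the
# usual sampling-variance estimator to it.
idx = rng.choice(len(selected), size=n_sample, replace=False)
r = np.corrcoef(selected[idx, 0], selected[idx, 1])[0, 1]
print("estimated sampling variance:", round((1 - r**2) ** 2 / (n_sample - 1), 5))
```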

14.
Coded 9 variables in a meta-analysis of 74 empirical studies of the job satisfaction–job performance relationship. Aggregated studies had a combined sample size of 12,192 and 217 satisfaction–performance correlations. Findings show that (1) the best estimate of the true population correlation between satisfaction and performance was relatively low (.17); (2) much of the variability in results obtained in previous research was due to the use of small sample sizes, while unreliable measurement of the satisfaction and performance constructs contributed relatively little to this observed variability in correlations; and (3) the 9 variables coded (composite vs unidimensional criteria, longitudinal vs cross-sectional measurement of performance relative to satisfaction, the nature of the performance measure, self-reports vs other sources, use of specific performance measures, subjectivity or objectivity of measures, specific-facet satisfaction vs global satisfaction, well-documented vs researcher-developed measurement, and white-collar vs blue-collar) were only modestly related to the magnitude of the satisfaction–performance correlation. (3 p ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
This study deals with some of the judgmental factors involved in selecting effect sizes from within the studies that enter a meta-analysis. Particular attention is paid to the conceptual redundancy rule that Smith, Glass, and Miller (1980) used in their study of the effectiveness of psychotherapy for deciding which effect sizes should and should not be counted in determining an overall effect size. Data from a random sample of 25 studies from Smith et al.'s (1980) population of psychotherapy outcome studies were first recoded and then reanalyzed meta-analytically. Using the conceptual redundancy rule, three coders independently coded effect sizes and identified more than twice as many of them per study as did Smith et al. Moreover, the treatment effect estimates associated with this larger sample of effects ranged between .30 and .50, about half the size claimed by Smith et al. Analyses of other rules for selecting effect sizes showed that average effect estimates also varied with these rules. Such results indicate that the average effect estimates derived from meta-analyses may depend heavily on judgmental factors that enter into how effect sizes are selected within each of the individual studies considered relevant to a meta-analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Examined 6 statistics for estimating the population correlation ratio, ρ² (the proportionate reduction in error or variance explained index), for bias. Findings reveal that the expected value of an adjusted version of the sample correlation ratio, η̂², was slightly less biased and less consistent than ω² or ε² with small population effects and sample sizes. A simplified method for generating approximate confidence intervals for ρ² was developed and found to be efficient relative to computation time. (25 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Widely used standard expressions for the sampling variance of intraclass correlations and genetic correlation coefficients were reviewed for small and large sample sizes. For the sampling variance of the intraclass correlation, it was shown by simulation that the commonly used expression, derived using a first-order Taylor series, performs better than alternative expressions found in the literature when the between-sire degrees of freedom were small. The expressions for the sampling variance of the genetic correlation are significantly biased for small sample sizes, in particular when the population values, or their estimates, are close to zero. It was shown, both analytically and by simulation, that this is because the estimate of the sampling variance becomes very large in these cases due to very small values of the denominator of the expressions. It was concluded, therefore, that for small samples, estimates of the heritabilities and genetic correlations should not be used in the expressions for the sampling variance of the genetic correlation. It was shown analytically that in cases where the population values of the heritabilities are known, using the estimated heritabilities rather than their true values to estimate the genetic correlation results in a lower sampling variance for the genetic correlation. Therefore, for large samples, estimates of heritabilities, and not their true values, should be used.

18.
An empirical attempt at demonstrating the bias in correlation coefficients that are corrected for both attenuation and range restriction was recently presented by R. Lee et al (see record 1983-02451-001). Using asymptotic methods, the present article analytically derives properties of the double-corrected correlation. It is shown that the double-corrected correlation is negatively biased. This negative bias decreases with increasing sample size and/or selection ratio. An expression for the standard error of the corrected correlation, useful for confidence interval estimation, is presented. Although the standard error of the corrected correlation is larger than that of the uncorrected correlation, the increase is proportionately smaller than the respective increase in the point estimate. Findings represent progress toward the request for full information about corrected correlations set forth in the Standards for Educational and Psychological Tests. (12 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
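For context, the Python sketch below strings together the classical attenuation correction and the Thorndike Case II range-restriction correction to produce a "double-corrected" correlation. The order of the corrections and all input values are assumptions for illustration, not taken from the article.

```python
import numpy as np

def correct_for_attenuation(r, rxx, ryy):
    # Classical disattenuation of an observed correlation for unreliability.
    return r / np.sqrt(rxx * ryy)

def correct_for_range_restriction(r, u):
    # Thorndike Case II correction; u = SD(unrestricted) / SD(restricted) for the predictor.
    return r * u / np.sqrt(1 + r**2 * (u**2 - 1))

# Hypothetical observed validity, reliabilities, and range-restriction ratio.
r_obs, rxx, ryy, u = 0.30, 0.80, 0.70, 1.5

r_double = correct_for_range_restriction(correct_for_attenuation(r_obs, rxx, ryy), u)
print(round(r_double, 3))
```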

19.
A Monte Carlo study was conducted to determine Type I and Type II error rates of the Schmidt and Hunter (S&H) meta-analysis method and the U statistic for assessing homogeneity within a set of correlations. One thousand samples of correlations were generated randomly to fill each of 450 cells of an 18 × 5 × 5 (Underlying Population Correlations × Numbers of Correlations Compared × Sample Size Per Correlation) design. To assess Type I error rates, correlations were drawn from the same population. To assess power, correlations were drawn from two different populations. As compared with U, which was uniformly robust, the Type I error rate for the S&H method was unacceptably high in many cells, particularly when the criterion for determining homogeneity was set at a highly conservative level. Power for the S&H method increased with increasing size of population differences, sample size per correlation, and in some cases, number of correlations compared. The U statistic did more poorly in most conditions in protecting from Type II errors. (14 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
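A rough Python sketch of the two kinds of homogeneity checks being compared: a Schmidt-Hunter-style ratio of expected (sampling error) variance to observed variance, and a chi-square test on Fisher-z transformed correlations. The exact form of the U statistic in the article may differ from what is shown here, and the study correlations and sample sizes are invented.

```python
import numpy as np
from scipy.stats import chi2

def sh_variance_ratio(r, n):
    # Schmidt-Hunter-style check: proportion of the observed variance in r that
    # sampling error alone is expected to produce (the basis of the "75% rule").
    r, n = np.asarray(r, float), np.asarray(n, float)
    r_bar = np.average(r, weights=n)
    var_obs = np.average((r - r_bar) ** 2, weights=n)
    var_err = np.average((1 - r_bar**2) ** 2 / (n - 1), weights=n)
    return var_err / var_obs

def chi_square_homogeneity(r, n):
    # Homogeneity test on Fisher-z transformed correlations (one common chi-square
    # form of such a test), with k - 1 degrees of freedom.
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)
    w = n - 3
    z_bar = np.average(z, weights=w)
    q = np.sum(w * (z - z_bar) ** 2)
    return q, chi2.sf(q, df=len(r) - 1)

r = [0.15, 0.42, 0.30, 0.05, 0.38]   # hypothetical study correlations
n = [80, 120, 60, 100, 90]
print(round(sh_variance_ratio(r, n), 3))
print(chi_square_homogeneity(r, n))
```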

20.
A meta-analysis of single-item measures of overall job satisfaction (28 correlations from 17 studies with 7,682 people) found an average uncorrected correlation of .63 (SD = .09) with scale measures of overall job satisfaction. The overall mean correlation (corrected only for reliability) is .67 (SD = .08), and it is moderated by the type of measurement scale used. The mean corrected correlation for the best group of scale measures (8 correlations, 1,735 people) is .72 (SD = .05). The correction for attenuation formula was used to estimate the minimum level of reliability for a single-item measure. These estimates range from .45 to .69, depending on the assumptions made. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
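The last step can be reproduced arithmetically. The Python sketch below rearranges the attenuation formula to solve for the single-item reliability; the scale reliability (.88) and the assumption that the true scores of the two measures correlate 1.0 are illustrative choices, not values taken from the article.

```python
def min_single_item_reliability(r_obs, scale_reliability, true_r=1.0):
    # Rearranged attenuation formula: r_obs = true_r * sqrt(r_item * r_scale),
    # solved for the single-item reliability r_item.
    return (r_obs / (true_r * scale_reliability**0.5)) ** 2

# Hypothetical inputs: observed correlation .63, scale reliability .88, true_r = 1.0.
print(round(min_single_item_reliability(0.63, 0.88), 2))
```

With these inputs the result is about .45, consistent with the lower end of the reported range; weaker assumptions about the true correlation or the scale reliability push the implied single-item reliability upward.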
