Similar Literature
20 similar records found
1.
The currently available meta-analytic methods for correlations have restrictive assumptions. The fixed-effects methods assume equal population correlations and perform poorly under correlation heterogeneity. The random-effects methods can accommodate correlation heterogeneity but rest on an equally unrealistic assumption: that the selected studies are a random sample from a well-defined superpopulation of study populations. Consequently, they do not perform properly in typical applications, where the studies are nonrandomly selected. A new fixed-effects meta-analytic confidence interval for bivariate correlations is proposed that is easy to compute and performs well under both correlation heterogeneity and nonrandom study selection. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
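For context, the conventional fixed-effects interval whose limitations the abstract describes can be sketched as follows. This is a minimal illustration only, assuming the usual Fisher-z transformation with weights n − 3; the article's proposed interval differs, and the function names are mine.

```python
import math

def fisher_z(r):
    """Fisher's variance-stabilizing transformation of a correlation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def fe_corr_ci(rs, ns, z_crit=1.959963984540054):
    """Conventional fixed-effects CI for an average correlation:
    inverse-variance weighted mean of Fisher-z values (weight n - 3),
    back-transformed to the r metric with tanh."""
    ws = [n - 3 for n in ns]
    z_bar = sum(w * fisher_z(r) for w, r in zip(ws, rs)) / sum(ws)
    se = math.sqrt(1 / sum(ws))
    return math.tanh(z_bar - z_crit * se), math.tanh(z_bar + z_crit * se)

# Two studies: r = .30 (n = 50) and r = .50 (n = 100).
lo, hi = fe_corr_ci([0.3, 0.5], [50, 100])
```

Note the larger study dominates the weighted average, which is exactly why this interval misbehaves when the population correlations actually differ.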

2.
The conventional fixed-effects (FE) and random-effects (RE) confidence intervals used to assess the average alpha reliability across multiple studies have serious limitations. The FE method, based on a constant coefficient model, assumes equal reliability coefficients across studies and breaks down under minor violations of this assumption. The RE method, based on a random coefficient model, assumes that the selected studies are a random sample from a normally distributed superpopulation. The RE method performs poorly in typical meta-analytic applications, where the studies either have not been randomly sampled or have been sampled from a nonnormal superpopulation. A new confidence interval for the average reliability coefficient of a specific measurement scale, based on a varying coefficient statistical model, is shown to perform well under realistic conditions of reliability heterogeneity and nonrandom sampling of studies. New methods are proposed for assessing reliability moderator effects. The proposed methods are especially useful in meta-analyses that involve a small number of carefully selected studies, whether to obtain a more accurate reliability estimate or to detect factors that moderate the reliability of a scale.

3.
4.
The growing popularity of meta-analysis has focused increased attention on the statistical models analysts are using and the assumptions underlying these models. Although comparisons often have been limited to fixed-effects (FE) models, recently there has been a call to investigate the differences between FE and random-effects (RE) models, differences that may have substantial theoretical and applied implications (National Research Council, 1992). Three FE models (including L. V. Hedges & I. Olkin's, 1985, and R. Rosenthal's, 1991, tests) and 2 RE models were applied to simulated correlation data in tests for moderator effects. The FE models seriously underestimated and the RE models greatly overestimated sampling error variance when their basic assumptions were violated, which caused biased confidence intervals and hypothesis tests. The implications of these and other findings are discussed, as are methodological issues concerning meta-analyses.

5.
One of the most frequently cited reasons for conducting a meta-analysis is the increase in statistical power that it affords a reviewer. This article demonstrates that fixed-effects meta-analysis increases statistical power by reducing the standard error of the weighted average effect size (T̄.) and, in so doing, shrinks the confidence interval around T̄.. Small confidence intervals make it more likely for reviewers to detect nonzero population effects, thereby increasing statistical power. Smaller confidence intervals also represent increased precision of the estimated population effect size. Computational examples are provided for 3 effect-size indices: d (standardized mean difference), Pearson's r, and odds ratios. Random-effects meta-analyses also may show increased statistical power and a smaller standard error of the weighted average effect size. However, the authors demonstrate that increasing the number of studies in a random-effects meta-analysis does not always increase statistical power.
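The mechanism described above can be illustrated directly: with inverse-variance weights, the standard error of the weighted average is sqrt(1/Σw), so every added study with finite variance increases the total weight and narrows the interval. A minimal sketch (the function name `fe_pool` is illustrative, not from the article):

```python
import math

def fe_pool(effects, variances):
    """Fixed-effects pooled estimate: inverse-variance weighted mean,
    with standard error sqrt(1 / sum of weights)."""
    ws = [1 / v for v in variances]
    est = sum(w * t for w, t in zip(ws, effects)) / sum(ws)
    return est, math.sqrt(1 / sum(ws))

# Adding studies increases the total weight, which shrinks the SE of
# the weighted average and narrows the confidence interval around it.
_, se_5 = fe_pool([0.2] * 5, [0.04] * 5)
_, se_10 = fe_pool([0.2] * 10, [0.04] * 10)
```

Doubling the number of identical studies here cuts the standard error by a factor of sqrt(2), which is the power gain the abstract attributes to fixed-effects pooling.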

6.
The increased use of effect sizes in single studies and meta-analyses raises new questions about statistical inference. Choice of an effect-size index can have a substantial impact on the interpretation of findings. The authors demonstrate the issue by focusing on two popular effect-size measures, the correlation coefficient and the standardized mean difference (e.g., Cohen's d or Hedges's g), both of which can be used when one variable is dichotomous and the other is quantitative. Although the indices are often practically interchangeable, differences in sensitivity to the base rate or variance of the dichotomous variable can alter conclusions about the magnitude of an effect depending on which statistic is used. Because neither statistic is universally superior, researchers should explicitly consider the importance of base rates to formulate correct inferences and justify the selection of a primary effect-size statistic.
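The base-rate sensitivity noted above can be illustrated with the standard large-sample conversion between d and the point-biserial r, namely r = d / sqrt(d² + 1/(pq)), where p and q are the proportions in the two levels of the dichotomous variable. A sketch under that textbook formula (the function name is mine):

```python
import math

def d_to_r(d, p):
    """Point-biserial r implied by a standardized mean difference d when
    the dichotomous variable has base rate p, via the standard
    large-sample conversion r = d / sqrt(d^2 + 1/(p*(1-p)))."""
    return d / math.sqrt(d ** 2 + 1 / (p * (1 - p)))

# The same d = 0.5 maps to a noticeably smaller r at a skewed base rate:
r_balanced = d_to_r(0.5, 0.5)  # base rate .5
r_skewed = d_to_r(0.5, 0.1)    # base rate .1
```

The identical mean difference thus looks "medium" on one index and "small" on the other once the base rate departs from .5, which is the interpretive hazard the abstract describes.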

7.
Compromised neurocognition is a core feature of schizophrenia. Following Heinrichs and Zakzanis's (1998) seminal meta-analysis of middle-aged and predominantly chronic schizophrenia samples, the aim of this study is to provide a meta-analysis of neurocognitive findings from 47 studies of first-episode (FE) schizophrenia published through October 2007. The meta-analysis uses 43 separate samples of 2,204 FE patients with a mean age of 25.5 and 2,775 largely age- and gender-matched control participants. FE samples demonstrated medium-to-large impairments across 10 neurocognitive domains (mean effect sizes from −0.64 to −1.20). Findings indicate that impairments are reliably and broadly present by the FE, approach or match the degree of deficit shown in well-established illness, and are maximal in immediate verbal memory and processing speed. IQ impairments that are larger in the FE than in the premorbid period, yet comparable to those in later phases of illness, suggest deterioration between the premorbid and FE phases followed by deficit stability at the group level. Considerable heterogeneity of effect sizes across studies, however, underscores variability in manifestations of the illness and a need for improved reporting of sample characteristics to support moderator variable analyses.

8.
This article describes what should typically be included in the introduction, method, results, and discussion sections of a meta-analytic review. Method sections include information on literature searches, criteria for inclusion of studies, and a listing of the characteristics recorded for each study. Results sections include information describing the distribution of obtained effect sizes, central tendencies, variability, tests of significance, confidence intervals, tests for heterogeneity, and contrasts (univariate or multivariate). The interpretation of meta-analytic results is often facilitated by the inclusion of the binomial effect size display procedure, the coefficient of robustness, file drawer analysis, and, where overall results are not significant, the counternull value of the obtained effect size and power analysis.
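The binomial effect size display (BESD) mentioned above recasts a correlation r as a difference between hypothetical success rates in two equal-sized groups, making a seemingly small r concrete. A minimal sketch of the standard procedure:

```python
def besd(r):
    """Binomial effect size display: recasts a correlation r as
    hypothetical 'success' rates of .50 + r/2 versus .50 - r/2 in the
    two groups, so the difference between the rates equals r."""
    return 0.5 + r / 2, 0.5 - r / 2

# An r of .32 corresponds to success rates of 66% versus 34%.
treatment_rate, control_rate = besd(0.32)
```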

9.
Recent interest in quantitative research synthesis has led to the development of rigorous statistical theory for some of the methods used in meta-analysis. Statistical theory proposed previously has stressed the estimation of fixed but unknown population effect sizes (standardized mean differences). Theoretical considerations often suggest that treatment effects are not fixed but vary across different implementations of a treatment. The present author presents a random effects model (analogous to random effects ANOVA) in which the population effect sizes are not fixed but are sample realizations from a distribution of possible population effect sizes. An analogy to variance component estimation is used to derive an unbiased estimator of the variance of the effect-size distribution. An example shows that these methods may suggest insights that are not available from inspection of the means and standard deviations of effect-size estimates.
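The variance-component idea can be illustrated with the familiar method-of-moments estimator of the between-study variance τ², shown here in its DerSimonian and Laird form as a sketch. The article derives a related unbiased estimator, not necessarily this one; the function name is mine.

```python
def mom_tau2(effects, variances):
    """Method-of-moments estimate of the between-study variance tau^2,
    truncated at zero: tau^2 = (Q - (k - 1)) / c, where Q is the
    weighted heterogeneity statistic and c scales it back to the
    effect-size metric."""
    ws = [1 / v for v in variances]
    sw = sum(ws)
    mean = sum(w * t for w, t in zip(ws, effects)) / sw
    q = sum(w * (t - mean) ** 2 for w, t in zip(ws, effects))
    c = sw - sum(w * w for w in ws) / sw
    return max(0.0, (q - (len(effects) - 1)) / c)

tau2_homog = mom_tau2([0.2, 0.2, 0.2], [0.04, 0.04, 0.04])  # identical effects
tau2_heter = mom_tau2([0.0, 1.0], [0.04, 0.04])             # spread-out effects
```

When the observed effects are identical, the estimate truncates to zero; when they are more spread out than sampling error alone predicts, the excess spread is attributed to the distribution of population effect sizes.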

10.
Assesses H. M. Cooper's (see record 1980-20979-001) attempt to demonstrate the superiority of a meta-analytic (statistical) to a literary (nonstatistical) approach to evaluating the import of a collection of tests of the same null hypothesis. Cooper performed a meta-analysis of the findings on sex differences in conformity behavior reported in 2 literary reviews—E. E. Maccoby and C. N. Jacklin (1975) and A. H. Eagly (see record 1979-23638-001). The present author criticizes Cooper's analysis for (a) having a statistical error that is instrumental in drawing 1 of 2 conclusions that conflict with those of the literary analysts, (b) its choice of effect-size indices, and (c) most importantly, its treatment of effect-size data. It is contended that contrary to his own injunction, Cooper almost totally failed to take effect-size data into account in his analysis. Effect-size analyses performed by the present author dispute the 2 conclusions of Cooper's that are at odds with those of the literary analysts.

11.
Missing effect-size estimates pose a particularly difficult problem in meta-analysis. Rather than discarding studies with missing effect-size estimates or setting missing effect-size estimates equal to 0, the meta-analyst can supplement effect-size procedures with vote-counting procedures if the studies report the direction of results or the statistical significance of results. By combining effect-size and vote-counting procedures, the meta-analyst can obtain a less biased estimate of the population effect size and a narrower confidence interval for the population effect size. This article describes 3 vote-counting procedures for estimating the population correlation coefficient in studies with missing sample correlations. Easy-to-use tables, based on equal sample sizes, are presented for the 3 procedures. More complicated vote-counting procedures also are given for unequal sample sizes.
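The core vote-counting idea, recovering a correlation from nothing more than the proportion of studies reporting a positive sample r, can be sketched under a Fisher-z normal approximation. This is an illustration of the general principle only; the article's tabled procedures and unequal-n methods are more refined, and the function name is mine.

```python
import math
from statistics import NormalDist

def vote_count_rho(k_positive, k_total, n):
    """Vote-counting estimate of a population correlation from the
    proportion of studies (each of common sample size n) whose sample
    correlation is positive. Assumes the Fisher-z approximation
    z_r ~ N(z_rho, 1/(n - 3)), under which P(r > 0) equals
    Phi(z_rho * sqrt(n - 3)); invert that relation and back-transform
    with tanh."""
    p_hat = k_positive / k_total
    z_rho = NormalDist().inv_cdf(p_hat) / math.sqrt(n - 3)
    return math.tanh(z_rho)

# 15 of 20 studies (common n = 28) reported r > 0:
est = vote_count_rho(15, 20, 28)
```

If exactly half the studies report a positive correlation, the estimate is zero, which matches the intuition that directions alone carry the information here.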

12.
An approach to sample size planning for multiple regression is presented that emphasizes accuracy in parameter estimation (AIPE). The AIPE approach yields precise estimates of population parameters by providing necessary sample sizes in order for the likely widths of confidence intervals to be sufficiently narrow. One AIPE method yields a sample size such that the expected width of the confidence interval around the standardized population regression coefficient is equal to the width specified. An enhanced formulation ensures, with some stipulated probability, that the width of the confidence interval will be no larger than the width specified. Issues involving standardized regression coefficients and random predictors are discussed, as are the philosophical differences between AIPE and the power analytic approaches to sample size planning.

13.
It is very common to find meta-analyses in which some of the studies compare 2 groups on continuous dependent variables and others compare groups on dichotomized variables. Integrating all of them in a meta-analysis requires an effect-size index in the same metric that can be applied to both types of outcomes. In this article, the performance in terms of bias and sampling variance of 7 different effect-size indices for estimating the population standardized mean difference from a 2 × 2 table is examined by Monte Carlo simulation, assuming normal and nonnormal distributions. The results show good performance for 2 indices, one based on the probit transformation and the other based on the logistic distribution.
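The two well-performing index families can be sketched in their standard forms: the probit index differences the inverse-normal transforms of the two groups' success proportions, and the logistic index rescales the log odds ratio by sqrt(3)/π (dividing by the standard logistic SD, π/sqrt(3)). These are the textbook versions, offered as an illustration; the exact estimators examined in the simulation may differ in details.

```python
import math
from statistics import NormalDist

def d_probit(p1, p2):
    """Standardized-mean-difference index via the probit transformation:
    difference of inverse-normal transforms of the two proportions."""
    inv = NormalDist().inv_cdf
    return inv(p1) - inv(p2)

def d_logistic(p1, p2):
    """Standardized-mean-difference index via the logistic distribution:
    log odds ratio rescaled by sqrt(3)/pi."""
    log_odds_ratio = math.log(p1 / (1 - p1)) - math.log(p2 / (1 - p2))
    return log_odds_ratio * math.sqrt(3) / math.pi

# Proportions of .60 versus .40 yield similar d values on both indices:
dp = d_probit(0.6, 0.4)
dl = d_logistic(0.6, 0.4)
```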

14.
Missing effect-size estimates pose a difficult problem in meta-analysis. Conventional procedures for dealing with this problem include discarding studies with missing estimates and imputing single values for missing estimates (e.g., 0, mean). An alternative procedure, which combines effect-size estimates and vote counts, is proposed for handling missing estimates. The combined estimator has several desirable features: (a) It uses all the information available from studies in a research synthesis, (b) it is consistent, (c) it is more efficient than other estimators, (d) it has known variance, and (e) it gives weight to all studies proportional to the Fisher information they provide. The combined procedure is the method of choice in a research synthesis when some studies do not provide enough information to compute effect-size estimates but do provide information about the direction or statistical significance of results.

15.
The authors conducted a meta-analytic review to assess the prevalence of major depressive disorder and depressive symptoms among Latinos compared with non-Latino Whites in the United States using community-based data. Random-effects estimates were calculated for 8 studies meeting inclusion criteria that reported lifetime prevalence of major depressive disorder (combined N = 76,270) and for 23 studies meeting inclusion criteria that reported current prevalence of depressive symptoms (combined N = 38,997). Findings did not indicate a group difference in lifetime prevalence of major depressive disorder (odds ratio = 0.89, 95% confidence interval = 0.72, 1.10). Latinos reported more depressive symptoms than non-Latino Whites (standardized mean difference = 0.19, 95% confidence interval = 0.12, 0.25); however, this effect was small and does not appear to suggest a clinically meaningful preponderance of depressive symptoms among Latinos. Findings are examined in the context of theories on vulnerability and resilience, and recommendations for future research are discussed.

16.
L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of sample size requirements for a standardized difference between 2 means in between-subjects designs. Sample size formulas are derived here for general standardized linear contrasts of k ≥ 2 means for both between-subjects designs and within-subjects designs. Special sample size formulas also are derived for the standardizer proposed by G. V. Glass (1976).

17.
Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to report a confidence interval for the population value of the effect size. Standardized linear contrasts of means are useful measures of effect size in a wide variety of research applications. New confidence intervals for standardized linear contrasts of means are developed and may be applied to between-subjects designs, within-subjects designs, or mixed designs. The proposed confidence interval methods are easy to compute, do not require equal population variances, and perform better than the currently available methods when the population variances are not equal.

18.
Indices of positive and negative agreement for observer reliability studies, in which neither observer can be regarded as the standard, have been proposed. In this article, it is demonstrated by means of an example and a small simulation study that a recently published method for constructing confidence intervals for these indices leads to intervals that are too wide. Appropriate asymptotic (i.e., large sample) variance estimates and confidence intervals for the positive and negative agreement indices are presented and compared with bootstrap confidence intervals. We also discuss an alternative method of interval estimation motivated from a Bayesian viewpoint. The asymptotic intervals performed adequately for sample sizes of 200 or more. For smaller samples, alternative confidence intervals such as bootstrap intervals or Bayesian intervals should be considered.
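The positive and negative agreement indices, and a percentile bootstrap interval of the kind recommended above for smaller samples, can be sketched as follows. Function names are illustrative, and the article's asymptotic and Bayesian intervals are not shown.

```python
import random

def pos_neg_agreement(a, b, c, d):
    """Positive and negative agreement for a 2x2 table of two raters:
    a = both positive, d = both negative, b and c = the disagreements.
    PA = 2a / (2a + b + c); NA = 2d / (2d + b + c)."""
    return 2 * a / (2 * a + b + c), 2 * d / (2 * d + b + c)

def bootstrap_pa_ci(a, b, c, d, reps=1000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for positive agreement: resample the
    classified pairs with replacement and recompute PA each time."""
    rng = random.Random(seed)
    cells = [0] * a + [1] * b + [2] * c + [3] * d
    stats = []
    for _ in range(reps):
        counts = [0, 0, 0, 0]
        for _ in cells:  # one resample of all N pairs
            counts[rng.choice(cells)] += 1
        ra, rb, rc = counts[0], counts[1], counts[2]
        if 2 * ra + rb + rc > 0:
            stats.append(2 * ra / (2 * ra + rb + rc))
    stats.sort()
    return (stats[int(alpha / 2 * len(stats))],
            stats[int((1 - alpha / 2) * len(stats)) - 1])

pa, na = pos_neg_agreement(40, 5, 5, 50)
lo, hi = bootstrap_pa_ci(40, 5, 5, 50)
```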

19.
This article compiles results from a century of social psychological research, more than 25,000 studies of 8 million people. A large number of social psychological conclusions are listed alongside meta-analytic information about the magnitude and variability of the corresponding effects. References to 322 meta-analyses of social psychological phenomena are presented, as well as statistical effect-size summaries. Analyses reveal that social psychological effects typically yield a value of r equal to .21 and that, in the typical research literature, effects vary from study to study in ways that produce a standard deviation in r of .15. Uses, limitations, and implications of this large-scale compilation are noted.

20.
Confidence intervals are widely accepted as a preferred way to present study results. They encompass significance tests and provide an estimate of the magnitude of the effect. However, comparisons of correlations still rely heavily on significance testing. The persistence of this practice is caused primarily by the lack of simple yet accurate procedures that can maintain coverage at the nominal level in a nonlopsided manner. The purpose of this article is to present a general approach to constructing approximate confidence intervals for differences between (a) 2 independent correlations, (b) 2 overlapping correlations, (c) 2 nonoverlapping correlations, and (d) 2 independent R2s. The distinctive feature of this approach is its acknowledgment of the asymmetry of sampling distributions for single correlations. This approach requires only the availability of confidence limits for the separate correlations and, for correlated correlations, a method for taking into account the dependency between correlations. These closed-form procedures are shown by simulation studies to provide very satisfactory results in small to moderate sample sizes. The proposed approach is illustrated with worked examples.
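For the independent-correlations case (a), the general recipe described above, building the interval for r1 − r2 from the asymmetric limits of the separate Fisher-z intervals, can be sketched as follows (a minimal illustration; function names are mine):

```python
import math

def corr_ci(r, n, z_crit=1.959963984540054):
    """Fisher-z confidence interval for a single correlation."""
    z, se = math.atanh(r), 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

def diff_ci_independent(r1, n1, r2, n2):
    """Approximate CI for r1 - r2 from two independent samples,
    assembled from the asymmetric limits of the separate intervals so
    that the skewness of each sampling distribution is respected."""
    l1, u1 = corr_ci(r1, n1)
    l2, u2 = corr_ci(r2, n2)
    lower = r1 - r2 - math.sqrt((r1 - l1) ** 2 + (u2 - r2) ** 2)
    upper = r1 - r2 + math.sqrt((u1 - r1) ** 2 + (r2 - l2) ** 2)
    return lower, upper

# r = .50 (n = 100) versus r = .30 (n = 100):
lo, hi = diff_ci_independent(0.5, 100, 0.3, 100)
```

Because each single-correlation interval is asymmetric around its r, the resulting difference interval is not symmetric around r1 − r2 either, which is the point of the approach.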


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号