Similar Articles
20 similar articles found (search time: 46 ms)
1.
The fixed-effects (FE) meta-analytic confidence intervals for unstandardized and standardized mean differences are based on an unrealistic assumption of effect-size homogeneity and perform poorly when this assumption is violated. The random-effects (RE) meta-analytic confidence intervals are based on an unrealistic assumption that the selected studies represent a random sample from a large superpopulation of studies. The RE approach cannot be justified in typical meta-analysis applications in which studies are nonrandomly selected. New FE meta-analytic confidence intervals for unstandardized and standardized mean differences are proposed that are easy to compute and perform properly under effect-size heterogeneity and nonrandomly selected studies. The proposed meta-analytic confidence intervals may be used to combine unstandardized or standardized mean differences from studies having either independent samples or dependent samples and may also be used to integrate results from previous studies into a new study. An alternative approach to assessing effect-size heterogeneity is presented. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
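For context, a minimal sketch of the conventional FE interval this abstract critiques: inverse-variance pooling of mean differences with a normal-theory CI. The study values are hypothetical, and this is the textbook FE method, not the paper's proposed interval.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study mean differences and sampling variances.
d = np.array([0.42, 0.31, 0.55, 0.18])
v = np.array([0.020, 0.015, 0.030, 0.012])

# Conventional FE pooling: inverse-variance weighted average.
w = 1.0 / v
d_bar = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))

z = stats.norm.ppf(0.975)
print(f"FE estimate = {d_bar:.3f}, "
      f"95% CI = ({d_bar - z * se:.3f}, {d_bar + z * se:.3f})")
```

Under heterogeneity, this interval's width understates the uncertainty in the pooled estimate, which is exactly the failure mode the abstract describes.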

2.
The conventional fixed-effects (FE) and random-effects (RE) confidence intervals that are used to assess the average alpha reliability across multiple studies have serious limitations. The FE method, which is based on a constant coefficient model, assumes equal reliability coefficients across studies and breaks down under minor violations of this assumption. The RE method, which is based on a random coefficient model, assumes that the selected studies are a random sample from a normally distributed superpopulation. The RE method performs poorly in typical meta-analytic applications where the studies have not been randomly sampled from a normally distributed superpopulation or have been randomly sampled from a nonnormal superpopulation. A new confidence interval for the average reliability coefficient of a specific measurement scale is based on a varying coefficient statistical model and is shown to perform well under realistic conditions of reliability heterogeneity and nonrandom sampling of studies. New methods are proposed for assessing reliability moderator effects. The proposed methods are especially useful in meta-analyses that involve a small number of carefully selected studies for the purpose of obtaining a more accurate reliability estimate or detecting factors that moderate the reliability of a scale. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
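One way to make the idea concrete: the sketch below averages log-complement-transformed alphas across studies, using the variance approximation Var[ln(1 − α̂)] ≈ 2k/((k − 1)(n − 2)) attributed to Bonett (2002). The data are hypothetical, and the exact correspondence to the interval proposed in this abstract is an assumption.

```python
import numpy as np
from scipy import stats

# Hypothetical study-level alpha estimates, sample sizes, item counts.
alpha = np.array([0.82, 0.78, 0.86])
n = np.array([120, 85, 200])
k = np.array([10, 10, 10])

# Log-complement transformation; variance approximation per
# Bonett (2002) (assumed formula: 2k / ((k - 1)(n - 2))).
y = np.log(1.0 - alpha)
var_y = 2.0 * k / ((k - 1.0) * (n - 2.0))

# Unweighted average across studies: no homogeneity assumption and
# no superpopulation sampling assumption is invoked.
m = len(alpha)
y_bar = y.mean()
se = np.sqrt(var_y.sum()) / m

z = stats.norm.ppf(0.975)
lo, hi = 1.0 - np.exp(y_bar + z * se), 1.0 - np.exp(y_bar - z * se)
print(f"average alpha = {1 - np.exp(y_bar):.3f}, "
      f"95% CI = ({lo:.3f}, {hi:.3f})")
```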

3.
Earlier work showed how to perform fixed-effects meta-analysis of studies or trials when each provides results on more than one outcome per patient and these multiple outcomes are correlated. That fixed-effects generalized-least-squares approach analyzes the multiple outcomes jointly within a single model, and it can include covariates, such as duration of therapy or quality of trial, that may explain observed heterogeneity of results among the trials. Sometimes the covariates explain all the heterogeneity, and the fixed-effects regression model is appropriate. However, unexplained heterogeneity may often remain, even after taking into account known or suspected covariates. Because fixed-effects models do not make allowance for this remaining unexplained heterogeneity, the potential exists for bias in estimated coefficients, standard errors and p-values. We propose two random-effects approaches for the regression meta-analysis of multiple correlated outcomes. We compare their use with fixed-effects models and with separate-outcomes models in a meta-analysis of periodontal clinical trials. A simulation study shows the advantages of the random-effects approach. These methods also facilitate meta-analysis of trials that compare more than two treatments.
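A minimal fixed-effects GLS sketch in the spirit of the approach described here, with hypothetical trials, two correlated outcomes each, and one trial-level covariate; a random-effects variant would add a between-trial variance component to V.

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical: 3 trials, 2 correlated outcomes each. y stacks the
# outcome-specific effect estimates; X codes an intercept, an
# outcome indicator, and a trial-level covariate (therapy duration).
y = np.array([0.40, 0.25, 0.55, 0.35, 0.30, 0.20])
duration = np.array([6, 6, 12, 12, 3, 3])
outcome2 = np.array([0, 1, 0, 1, 0, 1])
X = np.column_stack([np.ones(6), outcome2, duration])

# Within-trial covariance blocks encode the correlation between the
# two outcomes; trials are independent of one another.
def blk(v1, v2, rho):
    c = rho * np.sqrt(v1 * v2)
    return np.array([[v1, c], [c, v2]])

V = block_diag(blk(0.02, 0.03, 0.5),
               blk(0.015, 0.02, 0.5),
               blk(0.03, 0.04, 0.5))

# Fixed-effects GLS: beta = (X' V^-1 X)^-1 X' V^-1 y.
Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print("GLS coefficients:", np.round(beta, 3))
```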

4.
In 2 Monte Carlo studies of fixed- and random-effects meta-analysis for correlations, A. P. Field (2001) ostensibly evaluated Hedges–Olkin–Vevea Fisher-z and Schmidt–Hunter Pearson-r estimators and tests in 120 conditions. Some authors have cited those results as evidence not to meta-analyze Fisher-z correlations, especially with heterogeneous correlation parameters. The present attempt to replicate Field's simulations included comparisons with analytic values as well as results for efficiency and confidence-interval coverage. Field's results under homogeneity were mostly replicable, but those under heterogeneity were not: The latter exhibited up to .17 more bias than ours and, for tests of the mean correlation and homogeneity, respectively, nonnull rejection rates up to .60 lower and .65 higher. Changes to Field's observations and conclusions are recommended, and practical guidance is offered regarding simulation evidence and choices among methods. Most cautions about poor performance of Fisher-z methods are largely unfounded, especially with a more appropriate z-to-r transformation. The Appendix gives a computer program for obtaining Pearson-r moments from a normal Fisher-z distribution, which is used to demonstrate distortion due to direct z-to-r transformation of a mean Fisher-z correlation. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
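The distortion the Appendix demonstrates can be reproduced in a few lines: integrate tanh(z) against a normal Fisher-z density and compare with the naive back-transform of the mean z. The heterogeneity parameters below are hypothetical.

```python
import numpy as np
from scipy import integrate, stats

# Suppose study-level Fisher-z correlations are N(mu, tau^2).
mu, tau = 0.55, 0.25

# Mean of r under this z-distribution, by numerical integration:
# E[r] = integral of tanh(z) * N(z; mu, tau^2) dz.
mean_r, _ = integrate.quad(
    lambda z: np.tanh(z) * stats.norm.pdf(z, mu, tau), -np.inf, np.inf
)

# Naive "direct" back-transformation of the mean z.
naive_r = np.tanh(mu)

print(f"E[r] = {mean_r:.4f}  vs  tanh(mean z) = {naive_r:.4f}")
# tanh is concave for z > 0, so tanh(mean z) overstates the mean r
# (Jensen's inequality); the gap grows with tau.
```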

5.
One conceptualization of meta-analysis is that studies within the meta-analysis are sampled from populations with mean effect sizes that vary (random-effects models). The consequences of not applying such models and the comparison of different methods have been hotly debated. A Monte Carlo study compared the efficacy of Hedges and Vevea's random-effects methods of meta-analysis with Hunter and Schmidt's, over a wide range of conditions, as the variability in population correlations increases. (a) The Hunter-Schmidt method produced estimates of the average correlation with the least error, although estimates from both methods were very accurate; (b) confidence intervals from Hunter and Schmidt's method were always slightly too narrow but became more accurate than those from Hedges and Vevea's method as the number of studies included in the meta-analysis, the size of the true correlation, and the variability of correlations increased; and (c) the study weights did not explain the differences between the methods. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
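The two point estimators being compared are easy to sketch side by side. The study data below are hypothetical, and the τ² estimator is the common DerSimonian-Laird form, used here as a stand-in for Hedges and Vevea's own estimator.

```python
import numpy as np

# Hypothetical per-study correlations and sample sizes.
r = np.array([0.30, 0.45, 0.22, 0.38, 0.50])
n = np.array([60, 120, 45, 200, 80])

# Hunter-Schmidt: sample-size-weighted mean of raw correlations.
r_hs = np.sum(n * r) / np.sum(n)

# Hedges-Vevea style: Fisher-z scale, random-effects weights
# 1/(v_i + tau^2) with a DerSimonian-Laird tau^2 estimate.
z = np.arctanh(r)
v = 1.0 / (n - 3)
w = 1.0 / v
z_fe = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fe) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)
w_re = 1.0 / (v + tau2)
r_hv = np.tanh(np.sum(w_re * z) / np.sum(w_re))

print(f"Hunter-Schmidt mean r = {r_hs:.3f}; "
      f"Hedges-Vevea mean r = {r_hv:.3f}")
```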

6.
Publication bias, sometimes known as the "file-drawer problem" or "funnel-plot asymmetry," is common in empirical research. The authors review the implications of publication bias for quantitative research synthesis (meta-analysis) and describe existing techniques for detecting and correcting it. A new approach is proposed that is suitable for application to meta-analytic data sets that are too small for the application of existing methods. The model estimates parameters relevant to fixed-effects, mixed-effects or random-effects meta-analysis contingent on a hypothetical pattern of bias that is fixed independently of the data. The authors illustrate this approach for sensitivity analysis using 3 data sets adapted from a commonly cited reference work on research synthesis (H. M. Cooper & L. V. Hedges, 1994). (PsycINFO Database Record (c) 2010 APA, all rights reserved)
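Of the existing detection techniques the authors review, the funnel-plot-asymmetry idea is easy to illustrate. The sketch below runs Egger's regression test, a standard detection method and not the paper's new sensitivity-analysis model, on hypothetical effects; it assumes SciPy ≥ 1.6 for intercept_stderr.

```python
import numpy as np
from scipy import stats

# Hypothetical effect sizes and standard errors.
d = np.array([0.61, 0.45, 0.30, 0.52, 0.15, 0.48])
se = np.array([0.30, 0.22, 0.12, 0.25, 0.08, 0.20])

# Egger's test: regress the standardized effect on precision; an
# intercept far from zero signals funnel-plot asymmetry.
res = stats.linregress(1.0 / se, d / se)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(d) - 2)
print(f"Egger intercept = {res.intercept:.3f}, p = {p:.3f}")
```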

7.
Meta-analytic procedures allow for determining best estimates of the individual-level, the within-organization, and the organizational-level population correlations. In most validity generalization work, meta-analytic procedures have been used to provide best estimates of the within-organization correlation. However, in many other organizational domains, researchers often do not clearly specify which population parameter is of interest. Further, researchers often combine studies in which data were collected at different levels of analysis or with mixed (single- and multiple-organization) sampling schemes, making it difficult to interpret unambiguously the meta-analytic ρ̂. The authors focus on how to make appropriate inferences from meta-analytic studies by integrating a levels-of-analysis framework with meta-analytic techniques, highlighting how meta-analytic procedures can aid researchers in better understanding multilevel relationships among organizational constructs. The authors provide recommendations for clearer specifications of populations and levels issues in future meta-analytic studies. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Proposes a random-effects regression model for analysis of clustered data. Unlike ordinary regression analysis of clustered data, random-effects regression models do not assume that each observation is independent but do assume that data within clusters are dependent to some degree. The degree of this dependency is estimated along with estimates of the usual model parameters, thus adjusting these effects for the dependency resulting from the clustering of the data. A maximum marginal likelihood solution is described, and available statistical software for the model is discussed. An analysis of a dataset in which students are clustered within classrooms and schools is used to illustrate features of random-effects regression analysis, relative to both individual-level analysis that ignores the clustering of the data, and classroom-level analysis that aggregates the individual data. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
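A random-intercept sketch of the kind of analysis described, using simulated students-within-classrooms data and statsmodels' MixedLM, which fits by (restricted) maximum likelihood, close in spirit to the maximum marginal likelihood solution discussed; all data below are simulated for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated data: 20 classrooms, 25 students each; scores depend on
# a student covariate plus a classroom-level random intercept.
n_class, n_stud = 20, 25
cls = np.repeat(np.arange(n_class), n_stud)
u = rng.normal(0, 2.0, n_class)            # classroom random effects
x = rng.normal(0, 1.0, n_class * n_stud)   # student-level covariate
y = 50 + 3 * x + u[cls] + rng.normal(0, 5.0, len(x))
df = pd.DataFrame({"score": y, "x": x, "classroom": cls})

# Random-intercept regression: within-classroom dependency is modeled
# explicitly instead of being ignored (individual-level analysis) or
# averaged away (classroom-level analysis).
model = smf.mixedlm("score ~ x", df, groups=df["classroom"]).fit()
print(model.summary())
```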

9.
10.
Quantitative synthesis in systematic reviews
The final common pathway for most systematic reviews is a statistical summary of the data, or meta-analysis. The complex methods used in meta-analyses should always be complemented by clinical acumen and common sense in designing the protocol of a systematic review, deciding which data can be combined, and determining whether data should be combined. Both continuous and binary data can be pooled. Most meta-analyses summarize data from randomized trials, but other applications, such as the evaluation of diagnostic test performance and observational studies, have also been developed. The statistical methods of meta-analysis aim at evaluating the diversity (heterogeneity) among the results of different studies, exploring and explaining observed heterogeneity, and estimating a common pooled effect with increased precision. Fixed-effects models assume that an intervention has a single true effect, whereas random-effects models assume that an effect may vary across studies. Meta-regression analyses, by using each study rather than each patient as a unit of observation, can help to evaluate the effect of individual variables on the magnitude of an observed effect and thus may sometimes explain why study results differ. It is also important to assess the robustness of conclusions through sensitivity analyses and a formal evaluation of potential sources of bias, including publication bias and the effect of the quality of the studies on the observed effect.
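To make the fixed- versus random-effects distinction concrete for binary data, a sketch with hypothetical 2×2 trial counts: fixed-effects pooling of log odds ratios, a Q heterogeneity statistic, and a DerSimonian-Laird random-effects pooled estimate.

```python
import numpy as np

# Hypothetical 2x2 trial data: events/total, treatment and control.
et, nt = np.array([12, 8, 30]), np.array([100, 60, 250])
ec, nc = np.array([20, 14, 41]), np.array([100, 62, 248])

# Per-trial log odds ratios and variances.
lor = np.log((et * (nc - ec)) / (ec * (nt - et)))
v = 1 / et + 1 / (nt - et) + 1 / ec + 1 / (nc - ec)

# Fixed-effects pooling, Q, then DerSimonian-Laird random effects.
w = 1 / v
fe = np.sum(w * lor) / np.sum(w)
Q = np.sum(w * (lor - fe) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(lor) - 1)) / c)
w_re = 1 / (v + tau2)
re = np.sum(w_re * lor) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled OR (RE) = {np.exp(re):.2f}, 95% CI = "
      f"({np.exp(re - 1.96 * se_re):.2f}, {np.exp(re + 1.96 * se_re):.2f})")
```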

11.
Three methods of synthesizing correlations for meta-analytic structural equation modeling (SEM) under different degrees and mechanisms of missingness were compared for the estimation of correlation and SEM parameters and goodness-of-fit indices by using Monte Carlo simulation techniques. A revised generalized least squares (GLS) method for synthesizing correlations, weighted-covariance GLS (W-COV GLS), was compared with univariate weighting with untransformed correlations (univariate r) and univariate weighting with Fisher's z-transformed correlations (univariate z). These 3 methods were crossed with listwise and pairwise deletion. Univariate z and W-COV GLS performed similarly, with W-COV GLS providing slightly better estimation of parameters and more correct model rejection rates. Missing-not-at-random data produced high levels of relative bias in correlation and model parameter estimates and higher incorrect SEM model rejection rates. Pairwise deletion resulted in inflated standard errors for all synthesis methods and higher incorrect rejection rates for the SEM model with univariate weighting procedures. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
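A sketch of the univariate-z synthesis step (listwise-complete case, hypothetical matrices): Fisher-transform each element, pool with n − 3 weights, and back-transform; the pooled matrix would then be passed to an SEM program.

```python
import numpy as np

# Hypothetical: three studies' 3x3 correlation matrices and sizes.
R = np.array([
    [[1.0, 0.30, 0.20], [0.30, 1.0, 0.40], [0.20, 0.40, 1.0]],
    [[1.0, 0.35, 0.25], [0.35, 1.0, 0.45], [0.25, 0.45, 1.0]],
    [[1.0, 0.25, 0.15], [0.25, 1.0, 0.35], [0.15, 0.35, 1.0]],
])
n = np.array([150, 90, 210])

# Univariate-z synthesis: Fisher-transform each element (clipping
# keeps the unit diagonal finite), weight by n - 3, back-transform.
Z = np.arctanh(np.clip(R, -0.9999, 0.9999))
w = (n - 3)[:, None, None]
pooled = np.tanh(np.sum(w * Z, axis=0) / np.sum(w))
np.fill_diagonal(pooled, 1.0)
print(np.round(pooled, 3))
```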

12.
Provides simple but accurate methods for comparing correlation coefficients between a dependent variable and a set of independent variables. The methods are simple extensions of O. J. Dunn and V. A. Clark's (1969) work using the Fisher z transformation and include a test and confidence interval for comparing 2 correlated correlations, a test for heterogeneity, and a test and confidence interval for a contrast among k (>2) correlated correlations. Also briefly discussed is why the traditional Hotelling's t test for comparing correlations is generally not appropriate in practice. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Two previous meta-analyses concluded that average validity coefficients for the Rorschach and the MMPI have similar magnitudes (L. Atkinson, 1986; K. C. H. Parker et al, see record 1989-14153-001), but methodological problems in both meta-analyses may have impeded acceptance of these results (H. N. Garb et al, see record 1998-11225-011). We conducted a new meta-analysis comparing criterion-related validity evidence for the Rorschach and the MMPI. The unweighted mean validity coefficients (rs) were .30 for the MMPI and .29 for the Rorschach, and they were not reliably different (p = .76 under the fixed-effects model, p = .89 under the random-effects model). The MMPI had larger validity coefficients than the Rorschach for studies using psychiatric diagnoses and self-report measures as criterion variables, whereas the Rorschach had larger validity coefficients than the MMPI for studies using objective criterion variables. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
To synthesize studies that use structural equation modeling (SEM), researchers usually use Pearson correlations (univariate r), Fisher z scores (univariate z), or generalized least squares (GLS) to combine the correlation matrices. The pooled correlation matrix is then analyzed by the use of SEM. Questionable inferences may result from these ad hoc procedures. A 2-stage structural equation modeling (TSSEM) method is proposed to incorporate meta-analytic techniques and SEM into a unified framework. Simulation results reveal that the univariate-r, univariate-z, and TSSEM methods perform well in testing the homogeneity of correlation matrices and estimating the pooled correlation matrix. When fitting SEM, only TSSEM works well. The GLS method performs poorly in small to medium samples. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
A confidence interval (CI) for a population predictor weight for use with N. Cliff's (1994) method of ordinal multiple regression (OMR) is presented. The OMR CI is based on an estimated standard error of a weight derived from a fixed-effects model. A simulation was performed to examine the sampling properties of the OMR CI. The results show that the OMR CI had good Type I error rates and coverage. The OMR CI had lower power than the least-squares multiple regression (LSMR) CI when predictors were not correlated but had higher power when predictor correlations were moderate to high. In addition to discussing the simulation results, it is pointed out that the OMR CI can have superior sampling properties when the fixed-effects assumptions are violated. The OMR CI is recommended when a researcher wants to consider only ordinal information in multivariate prediction, when predictor correlations are moderate to high, and when the assumptions of fixed-effects LSMR are violated. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
We examined a longstanding assumption in vocational psychology that people–things and data–ideas are bipolar dimensions. Two minimal criteria for bipolarity were proposed and examined across 3 studies: (a) The correlation between opposite interest types should be negative; (b) after correcting for systematic responding, the correlation should be more negative than −.40. In Study 1, a meta-analysis using 26 interest inventories with a sample size of 1,008,253 participants showed that meta-analytic correlations between opposite RIASEC (realistic, investigative, artistic, social, enterprising, conventional) types ranged from −.03 to .18 (corrected meta-analytic correlations ranged from −.23 to −.06). In Study 2, structural equation models (SEMs) were fit to the Interest Finder (IF; Wall, Wise, & Baker, 1996) and the Interest Profiler (IP; Rounds, Smith, Hubert, Lewis, & Rivkin, 1999) with sample sizes of 13,939 and 1,061, respectively. The correlations of opposite RIASEC types were positive, ranging from .17 to .53. No corrected correlation met the criterion of −.40 except investigative–enterprising (r = −.67). Nevertheless, a direct estimate of the correlation between data–ideas end poles using targeted factor rotation did not reveal bipolarity. Furthermore, bipolar SEMs fit substantially worse than a multiple-factor representation of vocational interests. In Study 3, a two-way clustering solution on IF and IP respondents and items revealed a substantial number of individuals with interests in both people and things. We discuss key theoretical, methodological, and practical implications such as the structure of vocational interests, interpretation and scoring of interest measures for career counseling, and expert RIASEC ratings of occupations. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

17.
Examines relationships among 3 ANOVA measures of association—eta squared, epsilon squared, and omega squared. The rationale for each measure is developed within the fixed-effects ANOVA model, and the measures are related to corresponding measures of association in the regression model. Special attention is paid to the conceptual distinction between measures of association in fixed- vs random-effects designs. Limitations of these measures in fixed-effects designs are discussed, and recommendations for usage are provided. (43 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
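The three measures are easy to compare numerically. For a one-way fixed-effects design, η² = SS_B/SS_T, ε² = (SS_B − df_B·MS_W)/SS_T, and ω² = (SS_B − df_B·MS_W)/(SS_T + MS_W); the sketch below uses hypothetical data.

```python
import numpy as np

# Hypothetical one-way fixed-effects design with 3 groups.
groups = [np.array([4.1, 5.0, 3.8, 4.6]),
          np.array([5.5, 6.1, 5.9, 6.4]),
          np.array([4.9, 5.2, 4.4, 5.0])]

allv = np.concatenate(groups)
grand = allv.mean()
ss_b = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_t = np.sum((allv - grand) ** 2)
ss_w = ss_t - ss_b
df_b = len(groups) - 1
df_w = len(allv) - len(groups)
ms_w = ss_w / df_w

eta2 = ss_b / ss_t                             # positively biased
epsilon2 = (ss_b - df_b * ms_w) / ss_t         # bias-corrected
omega2 = (ss_b - df_b * ms_w) / (ss_t + ms_w)  # population-variance form
print(f"eta^2 = {eta2:.3f}, epsilon^2 = {epsilon2:.3f}, "
      f"omega^2 = {omega2:.3f}")
```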

18.
Empirical Bayes meta-analysis provides a useful framework for examining test validation. The fixed-effects case in which ρ has a single value corresponds to the inference that the situational specificity hypothesis can be rejected in a validity generalization study. A Bayesian analysis of such a case provides a simple and powerful test of ρ = 0; such a test has practical implications for significance testing in test validation. The random-effects case in which σ²ρ > 0 provides an explicit method with which to assess the relative importance of local validity studies and previous meta-analyses. Simulated data are used to illustrate both cases. Results of published meta-analyses are used to show that local validation becomes increasingly important as σ²ρ increases. The meaning of the term validity generalization is explored, and the problem of what can be inferred about test transportability in the random-effects case is described. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
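A rough sketch of the σ²ρ logic: subtract the expected sampling-error variance from the observed variance of validity coefficients and form a credibility interval. The data are hypothetical, and the sampling-error formula is the simplified Hunter-Schmidt bare-bones version, not the empirical Bayes machinery itself.

```python
import numpy as np

# Hypothetical validity coefficients and sample sizes.
r = np.array([0.25, 0.31, 0.18, 0.40, 0.28])
n = np.array([80, 150, 60, 200, 120])

r_bar = np.sum(n * r) / np.sum(n)
# Observed variance of r minus expected sampling-error variance
# estimates the variance of rho across situations.
var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n)
var_err = np.mean((1 - r_bar ** 2) ** 2 / (n - 1))
var_rho = max(0.0, var_obs - var_err)

# 90% credibility interval: where rho itself is likely to fall; a
# wide interval argues for local validation over generalization.
lo = r_bar - 1.645 * np.sqrt(var_rho)
hi = r_bar + 1.645 * np.sqrt(var_rho)
print(f"mean rho = {r_bar:.3f}, var(rho) = {var_rho:.4f}, "
      f"90% CV = ({lo:.3f}, {hi:.3f})")
```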

19.
The behavior of L. V. Hedges's (see record 1983-00213-001) Q test for the fixed-effects meta-analytic model was investigated for small and unequal study sample sizes paired with larger numbers of studies, nonnormal score distributions, and unequal variances. The results of a Monte Carlo study indicate that the hypothesis of equal effect sizes tends to be rejected less than expected if smaller study sample sizes are paired with larger numbers of studies; pairing smaller variances with larger sample sizes (or vice versa) leads to this hypothesis being rejected more than expected. The power of the Q test is also less than expected when small study sample sizes are paired with larger numbers of studies. These findings suggest conditions for which the Q test should be used cautiously. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
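The Q statistic under study is compact enough to show directly (hypothetical effects below): a weighted sum of squared deviations from the fixed-effects mean, referred to a chi-square with k − 1 degrees of freedom under homogeneity.

```python
import numpy as np
from scipy import stats

# Hypothetical standardized mean differences and sampling variances.
d = np.array([0.35, 0.10, 0.48, 0.22])
v = np.array([0.04, 0.06, 0.05, 0.03])

# Hedges's Q: weighted squared deviations from the FE pooled mean.
w = 1.0 / v
d_bar = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_bar) ** 2)
p = stats.chi2.sf(Q, df=len(d) - 1)
print(f"Q = {Q:.2f}, p = {p:.3f}")
```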

20.
Calculations of the power of statistical tests are important in planning research studies (including meta-analyses) and in interpreting situations in which a result has not proven to be statistically significant. The authors describe procedures to compute statistical power of fixed- and random-effects tests of the mean effect size, tests for heterogeneity (or variation) of effect size parameters across studies, and tests for contrasts among effect sizes of different studies. Examples are given using 2 published meta-analyses. The examples illustrate that statistical power is not always high in meta-analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
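The fixed-effects case of the mean-effect power computation is compact (hypothetical planning values below); a random-effects analogue would add τ² to each study's variance before weighting.

```python
import numpy as np
from scipy import stats

# Planned meta-analysis: k studies, each with sampling variance v of
# a standardized mean difference (hypothetical planning values).
k, v = 10, 0.05
mu = 0.20                      # postulated mean effect size

# FE test of the mean effect: z = mu_hat / SE(mu_hat).
se = np.sqrt(v / k)            # equal weights -> Var(mu_hat) = v / k
lam = mu / se                  # noncentrality of the z statistic
z_crit = stats.norm.ppf(0.975)
power = stats.norm.sf(z_crit - lam) + stats.norm.cdf(-z_crit - lam)
print(f"power = {power:.3f}")
```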
