Similar Articles
20 similar articles found (search time: 31 ms)
1.
When several clinical trials report multiple outcomes, meta-analyses ordinarily analyse each outcome separately. Instead, by applying generalized-least-squares (GLS) regression, Raudenbush et al. showed how to analyse the multiple outcomes jointly in a single model. A variant of their GLS approach, discussed here, can incorporate correlations among the outcomes within treatment groups and thus provide more accurate estimates. Also, it facilitates adjustment for covariates. In our approach, each study need not report all outcomes nor evaluate all treatments. For example, a meta-analysis may evaluate two or more treatments (one 'treatment' may be a control) and include all randomized controlled trials that report on any subset (of one or more) of the treatments of interest. The analysis omits other treatments that these trials evaluated but that are not of interest to the meta-analyst. In the proposed fixed-effects GLS regression model, study-level and treatment-arm-level covariates may be predictors of one or more of the outcomes. An analysis of rheumatoid arthritis data from trials of second-line drug treatments (used after initial standard therapies prove unsatisfactory for a patient) motivates and applies the method. Data from 44 randomized controlled trials were used to evaluate the effectiveness of injectable gold and auranofin on the three outcomes tender joint count, grip strength, and erythrocyte sedimentation rate. The covariates in the regression model were quality and duration of trial and baseline measures of the patients' disease severity and disease activity in each trial. The meta-analysis found that gold was significantly more effective than auranofin on all three treatment outcomes. For all estimated coefficients, the multiple-outcomes model produced moderate changes in their values and slightly smaller standard errors, compared with the three separate-outcome models.
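The core of a fixed-effects GLS fit over stacked outcome estimates can be sketched as follows. This is a minimal illustration, not the paper's exact model: the toy effect estimates, the block-diagonal within-study covariance matrix V, and the function name are all assumptions made for the example.

```python
import numpy as np

def gls_estimate(X, y, V):
    """Fixed-effects GLS: beta = (X' V^-1 X)^-1 X' V^-1 y,
    with coefficient covariance (X' V^-1 X)^-1.  V is the
    covariance matrix of the stacked outcome estimates y,
    block-diagonal by study, with nonzero within-study
    correlations between outcomes."""
    Vinv = np.linalg.inv(V)
    cov_beta = np.linalg.inv(X.T @ Vinv @ X)
    beta = cov_beta @ X.T @ Vinv @ y
    return beta, cov_beta

# Toy data: two studies, each reporting two correlated outcomes.
y = np.array([0.40, 0.30, 0.55, 0.35])          # effect estimates
X = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])  # outcome indicators
V = np.array([[0.04, 0.01, 0.00, 0.00],
              [0.01, 0.05, 0.00, 0.00],
              [0.00, 0.00, 0.03, 0.01],
              [0.00, 0.00, 0.01, 0.04]])        # within-study correlation
beta, cov = gls_estimate(X, y, V)
```

Because the two outcomes share information through the off-diagonal covariance terms, each pooled standard error comes out smaller than any single-study standard error for that outcome.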

2.
Random-effects regression models have become increasingly popular for analysis of longitudinal data. A key advantage of the random-effects approach is that it can be applied when subjects are not measured at the same number of timepoints. In this article we describe use of random-effects pattern-mixture models to further handle and describe the influence of missing data in longitudinal studies. For this approach, subjects are first divided into groups depending on their missing-data pattern and then variables based on these groups are used as model covariates. In this way, researchers are able to examine the effect of missing-data patterns on the outcome (or outcomes) of interest. Furthermore, overall estimates can be obtained by averaging over the missing-data patterns. A psychiatric clinical trials data set is used to illustrate the random-effects pattern-mixture approach to longitudinal data analysis with missing data. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
The last-observation-carried-forward imputation method is commonly used for imputing data missing due to dropouts in longitudinal clinical trials. The method assumes that outcome remains constant at the last observed value after dropout, which is unlikely in many clinical trials. Recently, random-effects regression models have become popular for analysis of longitudinal clinical trial data with dropouts. However, inference from random-effects regression models is valid only when dropout is missing at random. The random-effects pattern-mixture model, on the other hand, provides an approach that is valid under more general missingness mechanisms. In this article we describe the use of random-effects pattern-mixture models under different patterns for dropouts. First, subjects are divided into groups depending on their missing-data patterns, and then model parameters are estimated for each pattern. Finally, overall estimates are obtained by averaging over the missing-data patterns and corresponding standard errors are obtained using the delta method. A typical longitudinal clinical trial data set is used to illustrate and compare the above methods of data analyses in the presence of missing data due to dropouts.
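The final averaging step can be sketched as below. As a simplifying assumption the pattern proportions are treated as fixed, in which case the delta-method variance reduces to a weighted sum of pattern-specific variances; the full delta method would also propagate the sampling error in the estimated proportions. All numbers are illustrative.

```python
import numpy as np

def pattern_averaged(theta, se, pi):
    """Overall estimate from pattern-specific estimates theta,
    averaged over missing-data patterns with proportions pi.
    With pi treated as fixed, the delta-method standard error
    is sqrt(sum(pi^2 * se^2))."""
    theta, se, pi = map(np.asarray, (theta, se, pi))
    overall = np.sum(pi * theta)
    var = np.sum(pi**2 * se**2)
    return overall, np.sqrt(var)

# Three dropout patterns: completers, mid-study dropouts, early dropouts.
est, se = pattern_averaged(theta=[-1.2, -0.8, -0.5],
                           se=[0.15, 0.25, 0.40],
                           pi=[0.60, 0.25, 0.15])
```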

4.
In situations in which one cannot specify a single primary outcome, epidemiologic analyses often examine multiple associations between outcomes and explanatory covariates or risk factors. To compare alternative approaches to the analysis of multiple outcomes in regression models, I used generalized estimating equations (GEE) models, a multivariate extension of generalized linear models, to incorporate the dependence among the outcomes from the same subject and to provide robust variance estimates of the regression coefficients. I applied the methods in a hospital-population-based study of complications of surgical anaesthesia, using GEE model fitting and quasi-likelihood score and Wald tests. In one GEE model specification, I allowed the associations between each of the outcomes and a covariate to differ, yielding a regression coefficient for each of the outcome and covariate combinations; I obtained the covariances among the set of outcome-specific regression coefficients for each covariate from the robust 'sandwich' variance estimator. To address the problem of multiple inference, I used simultaneous methods that make adjustments to the test statistic p-values and the confidence interval widths, to control type I error and simultaneous coverage, respectively. In a second model specification, for each of the covariates I assumed a common association between the outcomes and the covariate, which eliminates the problem of multiplicity by use of a global test of association. In an alternative approach to multiplicity, I used empirical Bayes methods to shrink the outcome-specific coefficients toward a pooled mean that is similar to the common effect coefficient. GEE regression models can provide a flexible framework for estimation and testing of multiple outcomes.
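The robust 'sandwich' variance idea can be illustrated in its simplest form: a linear model with an independence working correlation, clustering observations by subject. A full GEE fit iterates the estimating equations and supports non-identity links; this sketch, with hypothetical names and simulated data, shows only the variance construction.

```python
import numpy as np

def cluster_robust_cov(X, resid, cluster):
    """Sandwich covariance bread^-1 @ meat @ bread^-1 for OLS
    coefficients, with the 'meat' accumulated per subject so the
    dependence among a subject's outcomes is respected."""
    bread = X.T @ X
    meat = np.zeros_like(bread)
    for g in np.unique(cluster):
        Xg, rg = X[cluster == g], resid[cluster == g]
        s = Xg.T @ rg            # per-cluster score contribution
        meat += np.outer(s, s)
    binv = np.linalg.inv(bread)
    return binv @ meat @ binv

# Toy example: 3 subjects x 2 outcomes each, intercept-only model.
rng = np.random.default_rng(0)
X = np.ones((6, 1))
y = rng.normal(size=6)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
V = cluster_robust_cov(X, y - X @ beta, cluster=np.array([0, 0, 1, 1, 2, 2]))
```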

5.
The fixed-effects (FE) meta-analytic confidence intervals for unstandardized and standardized mean differences are based on an unrealistic assumption of effect-size homogeneity and perform poorly when this assumption is violated. The random-effects (RE) meta-analytic confidence intervals are based on an unrealistic assumption that the selected studies represent a random sample from a large superpopulation of studies. The RE approach cannot be justified in typical meta-analysis applications in which studies are nonrandomly selected. New FE meta-analytic confidence intervals for unstandardized and standardized mean differences are proposed that are easy to compute and perform properly under effect-size heterogeneity and nonrandomly selected studies. The proposed meta-analytic confidence intervals may be used to combine unstandardized or standardized mean differences from studies having either independent samples or dependent samples and may also be used to integrate results from previous studies into a new study. An alternative approach to assessing effect-size heterogeneity is presented.

6.
There exists a variety of situations in which a random effects meta-analysis might be undertaken using a small number of clinical trials. A problem associated with small meta-analyses is estimating the heterogeneity between trials. To overcome this problem, information from other related studies may be incorporated into the meta-analysis. A Bayesian approach to this problem is presented using data from previous meta-analyses in the same therapeutic area to formulate a prior distribution for the heterogeneity. The treatment difference parameters are given non-informative priors. Further, related trials which compare one or other of the treatments of interest with a common third treatment are included in the model to improve inference on both the heterogeneity and the treatment difference. Two approaches to estimating relative efficacy are considered, namely a general parametric approach and a method explicit to binary data. The methodology is illustrated using data from 26 clinical trials which investigate the prevention of cirrhosis using beta-blockers and sclerotherapy. Both sources of external information lead to more precise posterior distributions for all parameters, in particular that representing heterogeneity.

7.
Quantitative synthesis in systematic reviews
The final common pathway for most systematic reviews is a statistical summary of the data, or meta-analysis. The complex methods used in meta-analyses should always be complemented by clinical acumen and common sense in designing the protocol of a systematic review, deciding which data can be combined, and determining whether data should be combined. Both continuous and binary data can be pooled. Most meta-analyses summarize data from randomized trials, but other applications, such as the evaluation of diagnostic test performance and observational studies, have also been developed. The statistical methods of meta-analysis aim at evaluating the diversity (heterogeneity) among the results of different studies, exploring and explaining observed heterogeneity, and estimating a common pooled effect with increased precision. Fixed-effects models assume that an intervention has a single true effect, whereas random-effects models assume that an effect may vary across studies. Meta-regression analyses, by using each study rather than each patient as a unit of observation, can help to evaluate the effect of individual variables on the magnitude of an observed effect and thus may sometimes explain why study results differ. It is also important to assess the robustness of conclusions through sensitivity analyses and a formal evaluation of potential sources of bias, including publication bias and the effect of the quality of the studies on the observed effect.
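The fixed-effects versus random-effects distinction described above can be made concrete with a short sketch: inverse-variance pooling, Cochran's Q for heterogeneity, and a standard moment (DerSimonian-Laird) estimate of the between-study variance. The data are invented for illustration.

```python
import numpy as np

def pool(effects, variances):
    """Inverse-variance fixed-effect pool, Cochran's Q, the
    DerSimonian-Laird between-study variance tau^2, and the
    resulting random-effects pool with weights 1/(v_i + tau^2)."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - fixed) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1))
               / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    random = np.sum(w_re * y) / np.sum(w_re)
    return fixed, random, tau2

fixed, random_, tau2 = pool([0.10, 0.30, 0.50], [0.01, 0.01, 0.01])
```

With equal study variances the two pooled estimates coincide, but the random-effects confidence interval would be wider because each weight is deflated by tau^2.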

8.
Several approaches have been proposed to model binary outcomes that arise from longitudinal studies. Most of the approaches can be grouped into two classes: the population-averaged and subject-specific approaches. The generalized estimating equations (GEE) method is commonly used to estimate population-averaged effects, while random-effects logistic models can be used to estimate subject-specific effects. However, it is not clear to many epidemiologists how these two methods relate to one another or how these methods relate to more traditional stratified analysis and standard logistic models. The authors address these issues in the context of a longitudinal smoking prevention trial, the Midwestern Prevention Project. In particular, the authors compare results from stratified analysis, standard logistic models, conditional logistic models, the GEE models, and random-effects models by analyzing a binary outcome from two and seven repeated measurements, respectively. In the comparison, the authors focus on the interpretation of both time-varying and time-invariant covariates under different models. Implications of these methods for epidemiologic research are discussed.

9.
Publication bias, sometimes known as the "file-drawer problem" or "funnel-plot asymmetry," is common in empirical research. The authors review the implications of publication bias for quantitative research synthesis (meta-analysis) and describe existing techniques for detecting and correcting it. A new approach is proposed that is suitable for application to meta-analytic data sets that are too small for the application of existing methods. The model estimates parameters relevant to fixed-effects, mixed-effects or random-effects meta-analysis contingent on a hypothetical pattern of bias that is fixed independently of the data. The authors illustrate this approach for sensitivity analysis using 3 data sets adapted from a commonly cited reference work on research synthesis (H. M. Cooper & L. V. Hedges, 1994).

10.
The currently available meta-analytic methods for correlations have restrictive assumptions. The fixed-effects methods assume equal population correlations and exhibit poor performance under correlation heterogeneity. The random-effects methods do not assume correlation homogeneity but are based on an equally unrealistic assumption that the selected studies are a random sample from a well-defined superpopulation of study populations. The random-effects methods can accommodate correlation heterogeneity, but these methods do not perform properly in typical applications where the studies are nonrandomly selected. A new fixed-effects meta-analytic confidence interval for bivariate correlations is proposed that is easy to compute and performs well under correlation heterogeneity and nonrandomly selected studies.

11.
Meta-analysis and structural equation modeling (SEM) are two important statistical methods in the behavioral, social, and medical sciences. They are generally treated as two unrelated topics in the literature. The present article proposes a model to integrate fixed-, random-, and mixed-effects meta-analyses into the SEM framework. By applying an appropriate transformation on the data, studies in a meta-analysis can be analyzed as subjects in a structural equation model. This article also highlights some practical benefits of using the SEM approach to conduct a meta-analysis. Specifically, the SEM-based meta-analysis can be used to handle missing covariates, to quantify the heterogeneity of effect sizes, and to address the heterogeneity of effect sizes with mixture models. Examples are used to illustrate the equivalence between the conventional meta-analysis and the SEM-based meta-analysis. Future directions on and issues related to the SEM-based meta-analysis are discussed.

12.
Multiple regression models are commonly used to control for confounding in epidemiologic research. Parametric regression models, such as multiple logistic regression, are powerful tools to control for multiple covariates provided that the covariate-risk associations are correctly specified. Residual confounding may result, however, from inappropriate specification of the confounder-risk association. In this paper, we illustrate the order of magnitude of residual confounding that may occur with traditional approaches to control for continuous confounders in multiple logistic regression, such as inclusion of a single linear term or categorization of the confounder, under a variety of assumptions on the confounder-risk association. We show that inclusion of the confounder as a single linear term often provides satisfactory control for confounding even in situations in which the model assumptions are clearly violated. In contrast, categorization of the confounder may often lead to serious residual confounding if the number of categories is small. Alternative strategies to control for confounding, such as polynomial regression or linear spline regression, are a useful supplement to the more traditional approaches.
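A linear spline term for a continuous confounder is simple to construct: the confounder enters once as a linear term plus one hinge term per knot, so the slope is allowed to change at each knot. The knot locations and values below are illustrative assumptions.

```python
import numpy as np

def linear_spline_basis(x, knots):
    """Design columns for a linear spline in a continuous
    confounder: x itself plus max(0, x - k) for each knot k.
    The coefficient on each hinge term is the change in slope
    at that knot."""
    x = np.asarray(x, float)
    cols = [x] + [np.maximum(0.0, x - k) for k in knots]
    return np.column_stack(cols)

# Confounder (e.g. age) at four values, with knots at 30 and 50.
B = linear_spline_basis([20, 40, 60, 80], knots=[30, 50])
```

The resulting columns would be passed to the logistic model alongside the exposure of interest in place of a single linear term.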

13.
The population risk, for example the control group mortality rate, is an aggregate measurement of many important attributes of a clinical trial, such as the general health of the patients treated and the experience of the staff performing the trial. Plotting measurements of the population risk against the treatment effect estimates for a group of clinical trials may reveal an apparent association, suggesting that differences in the population risk might explain heterogeneity in the results of clinical trials. In this paper we consider using estimates of population risk to explain treatment effect heterogeneity, and show that using these estimates as fixed covariates will result in bias. This bias depends on the treatment effect and population risk definitions chosen, and the magnitude of measurement errors. To account for the effect of measurement error, we represent clinical trials in a bivariate two-level hierarchical model, and show how to estimate the parameters of the model by both maximum likelihood and Bayes procedures. We use two examples to demonstrate the method.

14.
The growing popularity of meta-analysis has focused increased attention on the statistical models analysts are using and the assumptions underlying these models. Although comparisons often have been limited to fixed-effects (FE) models, recently there has been a call to investigate the differences between FE and random-effects (RE) models, differences that may have substantial theoretical and applied implications (National Research Council, 1992). Three FE models (including L. V. Hedges & I. Olkin's, 1985, and R. Rosenthal's, 1991, tests) and 2 RE models were applied to simulated correlation data in tests for moderator effects. The FE models seriously underestimated and the RE models greatly overestimated sampling error variance when their basic assumptions were violated, which caused biased confidence intervals and hypothesis tests. The implications of these and other findings are discussed as are methodological issues concerning meta-analyses.

15.
Examines relationships among 3 ANOVA measures of association—eta squared, epsilon squared, and omega squared. The rationale for each measure is developed within the fixed-effects ANOVA model, and the measures are related to corresponding measures of association in the regression model. Special attention is paid to the conceptual distinction between measures of association in fixed- vs random-effects designs. Limitations of these measures in fixed-effects designs are discussed, and recommendations for usage are provided.

16.
There has recently been disagreement in the literature on the results and interpretation of meta-analyses of the trials of serum cholesterol reduction, both in terms of the quantification of the effect on ischaemic heart disease and as regards the evidence of any adverse effect on other causes of death. This paper describes statistical aspects of a recent meta-analysis of these trials, and draws some more general conclusions about the methods used in meta-analysis. Tests of an overall null hypothesis are shown to have a basis clearly distinct from the more extensive assumptions needed to provide an overall estimate of effect. The fixed effect approach to estimation relies on the implausible assumption of homogeneity of treatment effects across the trials, and is therefore likely to yield confidence intervals which are too narrow and conclusions which are too dogmatic. However the conventional random effects method relies on its own set of unrealistic assumptions, and cannot be regarded as a robust solution to the problem of statistical heterogeneity. The random effects method is more usefully regarded as a type of sensitivity analysis in which the weights allocated to each study in estimating the overall effect are modified. However, rather than using a statistical model for the 'unexplained' heterogeneity, greater insight and scientific understanding of the results of a set of trials may be obtained by a careful exploration of potential sources of heterogeneity. In the context of the cholesterol trials, the heterogeneity according to the extent and duration of cholesterol reduction are of prime concern and are investigated using logistic regression. It is concluded that the long-term benefits of serum cholesterol reduction on the risk of heart disease have been seriously underestimated in some previous meta-analyses, while the evidence for adverse effects on other causes of death has been misleadingly exaggerated.
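The 'sensitivity analysis' view of random effects can be sketched numerically: the study weights 1/(v_i + tau^2) interpolate between precision weighting (tau^2 = 0) and equal weighting (tau^2 large). The variances and tau^2 values below are invented for illustration.

```python
import numpy as np

def study_weights(variances, tau2=0.0):
    """Normalized study weights 1/(v_i + tau^2).  tau2 = 0 gives
    the fixed-effect weights; increasing tau2 pulls the weights
    toward equality, down-weighting the largest trials."""
    w = 1.0 / (np.asarray(variances, float) + tau2)
    return w / w.sum()

v = [0.01, 0.04, 0.25]             # a large, medium, and small trial
w_fixed = study_weights(v)         # dominated by the large trial
w_re = study_weights(v, tau2=0.5)  # weights nearly equal
```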

17.
One conceptualization of meta-analysis is that studies within the meta-analysis are sampled from populations with mean effect sizes that vary (random-effects models). The consequences of not applying such models and the comparison of different methods have been hotly debated. A Monte Carlo study compared the efficacy of Hedges and Vevea's random-effects methods of meta-analysis with Hunter and Schmidt's, over a wide range of conditions, as the variability in population correlations increases. (a) The Hunter-Schmidt method produced estimates of the average correlation with the least error, although estimates from both methods were very accurate; (b) confidence intervals from Hunter and Schmidt's method were always slightly too narrow but became more accurate than those from Hedges and Vevea's method as the number of studies included in the meta-analysis, the size of the true correlation, and the variability of correlations increased; and (c) the study weights did not explain the differences between the methods.

18.
One of the most frequently cited reasons for conducting a meta-analysis is the increase in statistical power that it affords a reviewer. This article demonstrates that fixed-effects meta-analysis increases statistical power by reducing the standard error of the weighted average effect size (T̄.) and, in so doing, shrinking the confidence interval around T̄.. Small confidence intervals make it more likely for reviewers to detect nonzero population effects, thereby increasing statistical power. Smaller confidence intervals also represent increased precision of the estimated population effect size. Computational examples are provided for 3 effect-size indices: d (standardized mean difference), Pearson's r, and odds ratios. Random-effects meta-analyses also may show increased statistical power and a smaller standard error of the weighted average effect size. However, the authors demonstrate that increasing the number of studies in a random-effects meta-analysis does not always increase statistical power.
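The mechanism described here is just the standard error of an inverse-variance weighted average, 1/sqrt(sum of weights), which shrinks as studies accumulate. A minimal sketch with invented within-study variances:

```python
import math

def fe_se(within_variances):
    """Standard error of the fixed-effects weighted average effect
    size: 1 / sqrt(sum of inverse-variance weights)."""
    return 1.0 / math.sqrt(sum(1.0 / v for v in within_variances))

# Pooling k identical studies shrinks the SE by a factor sqrt(k).
se_1 = fe_se([0.04])       # 0.2
se_4 = fe_se([0.04] * 4)   # 0.1
```

Under random effects the analogous weights are 1/(v_i + tau^2), so with large between-study variance each added study reduces the standard error much less, which is one way to see why adding studies does not always increase power.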

19.
Sources of population heterogeneity may or may not be observed. If the sources of heterogeneity are observed (e.g., gender), the sample can be split into groups and the data analyzed with methods for multiple groups. If the sources of population heterogeneity are unobserved, the data can be analyzed with latent class models. Factor mixture models are a combination of latent class and common factor models and can be used to explore unobserved population heterogeneity. Observed sources of heterogeneity can be included as covariates. The different ways to incorporate covariates correspond to different conceptual interpretations. These are discussed in detail. Characteristics of factor mixture modeling are described in comparison to other methods designed for data stemming from heterogeneous populations. A step-by-step analysis of a subset of data from the Longitudinal Survey of American Youth illustrates how factor mixture models can be applied in an exploratory fashion to data collected at a single time point.

20.
The conventional fixed-effects (FE) and random-effects (RE) confidence intervals that are used to assess the average alpha reliability across multiple studies have serious limitations. The FE method, which is based on a constant coefficient model, assumes equal reliability coefficients across studies and breaks down under minor violations of this assumption. The RE method, which is based on a random coefficient model, assumes that the selected studies are a random sample from a normally distributed superpopulation. The RE method performs poorly in typical meta-analytic applications where the studies have not been randomly sampled from a normally distributed superpopulation or have been randomly sampled from a nonnormal superpopulation. A new confidence interval for the average reliability coefficient of a specific measurement scale is based on a varying coefficient statistical model and is shown to perform well under realistic conditions of reliability heterogeneity and nonrandom sampling of studies. New methods are proposed for assessing reliability moderator effects. The proposed methods are especially useful in meta-analyses that involve a small number of carefully selected studies for the purpose of obtaining a more accurate reliability estimate or to detect factors that moderate the reliability of a scale.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号