Similar Articles
20 similar records found (search time: 15 ms)
1.
There has recently been disagreement in the literature on the results and interpretation of meta-analyses of the trials of serum cholesterol reduction, both in terms of the quantification of the effect on ischaemic heart disease and as regards the evidence of any adverse effect on other causes of death. This paper describes statistical aspects of a recent meta-analysis of these trials, and draws some more general conclusions about the methods used in meta-analysis. Tests of an overall null hypothesis are shown to have a basis clearly distinct from the more extensive assumptions needed to provide an overall estimate of effect. The fixed effect approach to estimation relies on the implausible assumption of homogeneity of treatment effects across the trials, and is therefore likely to yield confidence intervals which are too narrow and conclusions which are too dogmatic. However, the conventional random effects method relies on its own set of unrealistic assumptions, and cannot be regarded as a robust solution to the problem of statistical heterogeneity. The random effects method is more usefully regarded as a type of sensitivity analysis in which the weights allocated to each study in estimating the overall effect are modified. However, rather than using a statistical model for the 'unexplained' heterogeneity, greater insight and scientific understanding of the results of a set of trials may be obtained by a careful exploration of potential sources of heterogeneity. In the context of the cholesterol trials, the heterogeneity according to the extent and duration of cholesterol reduction is of prime concern and is investigated using logistic regression. It is concluded that the long-term benefits of serum cholesterol reduction on the risk of heart disease have been seriously underestimated in some previous meta-analyses, while the evidence for adverse effects on other causes of death has been misleadingly exaggerated.

2.
Quantitative synthesis in systematic reviews   (Total citations: 4; self-citations: 0; citations by others: 4)
The final common pathway for most systematic reviews is a statistical summary of the data, or meta-analysis. The complex methods used in meta-analyses should always be complemented by clinical acumen and common sense in designing the protocol of a systematic review, deciding which data can be combined, and determining whether data should be combined. Both continuous and binary data can be pooled. Most meta-analyses summarize data from randomized trials, but other applications, such as the evaluation of diagnostic test performance and observational studies, have also been developed. The statistical methods of meta-analysis aim at evaluating the diversity (heterogeneity) among the results of different studies, exploring and explaining observed heterogeneity, and estimating a common pooled effect with increased precision. Fixed-effects models assume that an intervention has a single true effect, whereas random-effects models assume that an effect may vary across studies. Meta-regression analyses, by using each study rather than each patient as a unit of observation, can help to evaluate the effect of individual variables on the magnitude of an observed effect and thus may sometimes explain why study results differ. It is also important to assess the robustness of conclusions through sensitivity analyses and a formal evaluation of potential sources of bias, including publication bias and the effect of the quality of the studies on the observed effect.
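The fixed-effects versus random-effects distinction drawn in this abstract can be made concrete with a short calculation. The following is a minimal sketch (not code from the reviewed work) of inverse-variance pooling, assuming each study contributes an effect estimate such as a log odds ratio together with its standard error; the DerSimonian-Laird moment estimator is used for the between-study variance, and the five input studies are hypothetical.

```python
import numpy as np

def pool_effects(yi, sei):
    """Inverse-variance pooling of study effect estimates (e.g. log odds ratios).

    Returns the fixed-effect estimate, the DerSimonian-Laird random-effects
    estimate, and the estimated between-study variance tau^2.
    """
    yi, sei = np.asarray(yi, float), np.asarray(sei, float)
    wi = 1.0 / sei**2                                  # fixed-effect weights
    fixed = np.sum(wi * yi) / np.sum(wi)

    # Cochran's Q and the DerSimonian-Laird moment estimator of tau^2
    q = np.sum(wi * (yi - fixed)**2)
    df = len(yi) - 1
    tau2 = max(0.0, (q - df) / (np.sum(wi) - np.sum(wi**2) / np.sum(wi)))

    wi_star = 1.0 / (sei**2 + tau2)                    # random-effects weights
    random_eff = np.sum(wi_star * yi) / np.sum(wi_star)
    return fixed, random_eff, tau2

# Hypothetical log odds ratios and standard errors from five trials
print(pool_effects([-0.3, -0.1, -0.5, 0.05, -0.2], [0.12, 0.20, 0.25, 0.15, 0.30]))
```

When the estimated between-study variance is zero the two pooled estimates coincide; otherwise the random-effects weights are more nearly equal across studies, so small trials gain relative influence and the pooled confidence interval widens.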

3.
Are meta-analyses the brave new world, or are the critics of such combined analyses right to say that the biases inherent in clinical trials make them uncombinable? Negative trials are often unreported, and hence can be missed by meta-analysts. And how much heterogeneity between trials is acceptable? A recent major criticism is that large randomised trials do not always agree with a prior meta-analysis. Neither individual trials nor meta-analyses, reporting as they do on population effects, tell how to treat the individual patient. Here we take a more rounded approach to meta-analyses, arguing that their strengths outweigh their weaknesses, although the latter must not be brushed aside.

4.
We describe a meta-analysis approach for the evaluation of a potential surrogate marker. Surrogate markers are useful in helping to identify therapeutic mechanisms of action and disease pathogenesis, and for selecting therapies to take forward from phase II to phase III clinical trials. They have also become increasingly important for regulatory purposes by providing a basis for preliminary approval of drugs pending clinical outcome studies. Methodology for evaluating surrogate markers has focused on determining the difference in the effects of two treatments on clinical outcome in an individual clinical trial, and then estimating the proportion of this difference explained by the treatment's effects on the potential marker. Studies are, however, frequently underpowered or cease before they accumulate sufficient evidence to draw strong conclusions about the value of a potential surrogate marker using this approach, and there are also some technical difficulties with the approach. Consideration of the association between the difference in treatment effects on the clinical outcome and the difference in treatment effects on the potential marker over a range of trials provides an alternative means to evaluate a potential marker. We describe a meta-analysis approach using Bayesian methods to model this association. Importantly, this approach enables one to obtain prediction intervals for the true difference in clinical outcome for a given estimated treatment difference in the effect on the potential marker. We illustrate the methodology by applying it to results from studies of the AIDS Clinical Trials Group to assess the value of CD4 T-lymphocyte cell count as a potential surrogate marker for the treatment effects on the development of AIDS or death.

5.
Meta-analysis and structural equation modeling (SEM) are two important statistical methods in the behavioral, social, and medical sciences. They are generally treated as two unrelated topics in the literature. The present article proposes a model to integrate fixed-, random-, and mixed-effects meta-analyses into the SEM framework. By applying an appropriate transformation on the data, studies in a meta-analysis can be analyzed as subjects in a structural equation model. This article also highlights some practical benefits of using the SEM approach to conduct a meta-analysis. Specifically, the SEM-based meta-analysis can be used to handle missing covariates, to quantify the heterogeneity of effect sizes, and to address the heterogeneity of effect sizes with mixture models. Examples are used to illustrate the equivalence between the conventional meta-analysis and the SEM-based meta-analysis. Future directions for, and issues related to, the SEM-based meta-analysis are discussed.

6.
Although meta-analysis has become a widespread data-analytic strategy to review a collection of group comparison studies, meta-analyses of the results of single-case studies are relatively sparse. In this article it is argued that combining the data of individual cases, studied in different studies or in the same study, can be a meaningful and important source of information. By combining the results of individual cases, both group and individual parameters can be estimated and tested efficiently, using all data available. Moreover, the moderating effect of case or study characteristics can be explored. We (a) describe the hierarchical linear models approach to answer these general meta-analytical questions for single-case data; (b) compare the approach with the Busk and Serlin (1992) approach; (c) present hierarchical linear models that can be used in various situations for the quantitative integration of single-case data; and (d) show how the SAS software can be used for estimating the unknown parameters.

7.
Objective: We examined four meta-analyses of behavioral interventions for adults (Dixon, Keefe, Scipio, Perri, & Abernethy, 2007; Hoffman, Papas, Chatkoff, & Kerns, 2007; Irwin, Cole, & Nicassio, 2006; and Jacobsen, Donovan, Vadaparampil, & Small, 2007) that have appeared in the Evidence Based Treatment Reviews section of Health Psychology. Design: Narrative review. Main Outcome Measures: We applied the following criteria to each meta-analysis: (1) whether each meta-analysis was described accurately, adequately, and transparently in the article; (2) whether there was an adequate attempt to deal with methodological quality of the original trials; (3) the extent to which the meta-analysis depended on small, underpowered studies; and (4) the extent to which the meta-analysis provided valid and useful evidence-based recommendations. Results: Across the four meta-analyses, we identified substantial problems with the transparency and completeness with which these meta-analyses were reported, as well as a dependence on small, underpowered trials of generally poor quality. Conclusion: Results of our exercise raise questions about the clinical validity and utility of the conclusions of these meta-analyses. Results should serve as a wake-up call to prospective authors, reviewers, and end-users of meta-analyses now appearing in the literature.

8.
The computation of effect sizes is a key feature of meta-analysis. In treatment outcome meta-analyses, the standardized mean difference statistic on posttest scores (d) is usually the effect size statistic used. However, when primary studies do not report the statistics needed to compute d, many methods for estimating d from other data have been developed. Little is known about the accuracy of these estimates, yet meta-analysts frequently use them on the assumption that they are estimating the same population parameter as d. This study investigates that assumption empirically. On a sample of 140 psychosocial treatment or prevention studies from a variety of areas, the present study shows that these estimates yield results that are often not equivalent to d in either mean or variance. The frequent mixing of d and other estimates of d in past meta-analyses, therefore, may have led to biased effect size estimates and inaccurate significance tests.
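For context, the sketch below shows the usual computation of the standardized mean difference d from posttest summaries, plus one common conversion from a reported independent-samples t statistic; the function names and the numbers are illustrative assumptions, not the specific estimation methods evaluated in the study.

```python
import math

def d_from_means(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference on posttest scores (pooled-SD Cohen's d)."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def d_from_t(t, n1, n2):
    """Common approximation of d when only an independent-samples t is reported."""
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

# Hypothetical treatment vs. control posttest summaries
print(d_from_means(24.0, 20.5, 6.0, 5.5, 40, 38))   # d computed directly
print(d_from_t(2.4, 40, 38))                        # d approximated from t
```

The abstract's point is precisely that such conversions are not guaranteed to recover the same population parameter, in mean or variance, as d computed directly from means and standard deviations.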

9.
BACKGROUND: Epidemiologic evidence and meta-analyses of data from early clinical trials suggest that lowering the levels of cholesterol does not reduce stroke events. These analyses have not included more recent clinical trials using reductase inhibitors. OBJECTIVE: To conduct a meta-analysis of the effect of reducing cholesterol levels on stroke in all reported clinical trials of primary (n = 4) and secondary (n = 8) prevention of coronary heart disease that used reductase inhibitor monotherapy and provided information on incident stroke. RESULTS: Analysis of combined data from primary and secondary prevention trials showed a highly statistically significant reduction of stroke associated with the use of reductase inhibitor monotherapy (27% reduction in stroke; P = .001). Analysis of secondary prevention trials alone disclosed a similar statistically significant effect (32% reduction in stroke; P = .001). A smaller nonsignificant reduction in stroke was noted in the primary prevention trials (15% reduction in stroke; P = .48). CONCLUSIONS: Reductase inhibitors now in use for lowering cholesterol levels are more potent and have fewer side effects than the cholesterol-lowering agents previously available. They appear to reduce stroke, most notably in patients with prevalent coronary artery disease, which may be partly due to the effects of lowering the levels of cholesterol on the progression and plaque stability of extracranial carotid atherosclerosis or the marked reduction of incident coronary heart disease associated with treatment.

10.
In a meta-analysis of a set of clinical trials, a crucial but problematic component is providing an estimate and confidence interval for the overall treatment effect θ. Since in the presence of heterogeneity a fixed effect approach yields an artificially narrow confidence interval for θ, the random effects method of DerSimonian and Laird, which incorporates a moment estimator of the between-trial component of variance σ_B², has been advocated. With the additional distributional assumptions of normality, a confidence interval for θ may be obtained. However, this method does not provide a confidence interval for σ_B², nor a confidence interval for θ which takes account of the fact that σ_B² has to be estimated from the data. We show how a likelihood based method can be used to overcome these problems, and use profile likelihoods to construct likelihood based confidence intervals. This approach yields an appropriately widened confidence interval compared with the standard random effects method. Examples of application to a published meta-analysis and a multicentre clinical trial are discussed. It is concluded that likelihood based methods are preferred to the standard method in undertaking random effects meta-analysis when the value of σ_B² has an important effect on the overall estimated treatment effect.
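As a hedged illustration of the profile-likelihood idea, the sketch below maximizes the normal random-effects log-likelihood over the between-trial variance at each candidate value of θ and keeps the values of θ whose profile log-likelihood lies within half a chi-squared critical value of the maximum; the grid, the optimization bounds, and the seven input studies are assumptions for illustration, not data from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def log_lik(theta, tau2, yi, vi):
    """Normal random-effects log-likelihood for study effects yi with variances vi."""
    s = vi + tau2
    return -0.5 * np.sum(np.log(s) + (yi - theta) ** 2 / s)

def profile_ci(yi, vi, level=0.95, grid=np.linspace(-2, 2, 2001)):
    """Profile-likelihood confidence interval for the overall effect theta.

    For each candidate theta, the likelihood is maximised over tau^2, so the
    resulting interval reflects the uncertainty in the between-trial variance.
    """
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)

    def prof(theta):
        res = minimize_scalar(lambda t2: -log_lik(theta, t2, yi, vi),
                              bounds=(0.0, 10.0), method="bounded")
        return -res.fun

    prof_vals = np.array([prof(th) for th in grid])
    cutoff = prof_vals.max() - 0.5 * chi2.ppf(level, df=1)
    inside = grid[prof_vals >= cutoff]
    return inside.min(), inside.max()

# Hypothetical log odds ratios and within-study variances from seven trials
yi = [-0.4, -0.2, -0.6, 0.1, -0.3, -0.5, 0.0]
vi = [0.04, 0.09, 0.06, 0.05, 0.12, 0.08, 0.10]
print(profile_ci(yi, vi))
```

With few trials this interval is typically wider than the standard random-effects interval, which treats the estimated between-trial variance as if it were known.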

11.
Earlier work showed how to perform fixed-effects meta-analysis of studies or trials when each provides results on more than one outcome per patient and these multiple outcomes are correlated. That fixed-effects generalized-least-squares approach analyzes the multiple outcomes jointly within a single model, and it can include covariates, such as duration of therapy or quality of trial, that may explain observed heterogeneity of results among the trials. Sometimes the covariates explain all the heterogeneity, and the fixed-effects regression model is appropriate. However, unexplained heterogeneity may often remain, even after taking into account known or suspected covariates. Because fixed-effects models do not make allowance for this remaining unexplained heterogeneity, the potential exists for bias in estimated coefficients, standard errors and p-values. We propose two random-effects approaches for the regression meta-analysis of multiple correlated outcomes. We compare their use with fixed-effects models and with separate-outcomes models in a meta-analysis of periodontal clinical trials. A simulation study shows the advantages of the random-effects approach. These methods also facilitate meta-analysis of trials that compare more than two treatments.

12.
A fair test of the Dodo bird conjecture that different psychotherapies are equally effective would entail separate comparisons of every pair of therapies. A meta-analysis of overall effect size for any particular set of such pairs is only relevant to the Dodo bird conjecture when the mean absolute value of differences is 0. The limitations of the underlying randomized clinical trials and the problem of uncontrolled causal variables make clinically useful treatment differences unlikely to be revealed by such heterogeneous meta-analyses. To enhance implications for practice, the authors recommend an intensified focus on patient–treatment interactions, cost-effectiveness variables, and separate meta-analyses for each pair of treatments.

13.
For a meta-analysis to give definitive information, it should meet at least the minimum standards that would be expected of a well-designed, adequately powered, and carefully conducted randomised controlled trial. These minimum standards include both qualitative characteristics--a prospective protocol, comparable definitions of key outcomes, quality control of data, and inclusion of all patients from all trials in the final analysis--and quantitative standards--an assessment of whether the total sample is large enough to provide reliable results and the use of appropriate statistical monitoring guidelines to indicate when the results of the accumulating data of a meta-analysis are conclusive. We believe that rigorous meta-analyses undertaken according to these principles will lead to more reliable evidence about the efficacy and safety of interventions than either retrospective meta-analysis or individual trials.

14.
If the control rate (CR) in a clinical trial represents the incidence or the baseline severity of illness in the study population, the size of treatment effects may tend to vary with the size of control rates. To investigate this hypothesis, we examined 115 meta-analyses covering a wide range of medical applications for evidence of a linear relationship between the CR and three treatment effect (TE) measures: the risk difference (RD), the log relative risk (RR), and the log odds ratio (OR). We used a hierarchical model that estimates the true regression while accounting for the random error in the measurement of, and the functional dependence between, the observed TE and the CR. Using a two standard error rule of significance, we found the control rate was about two times more likely to be significantly related to the RD (31 per cent) than to the RR (13 per cent) or the OR (14 per cent). Correlations between TE and CR were more likely when the meta-analysis included 10 or more trials and if patient follow-up was less than six months and homogeneous. Use of weighted linear regression (WLR) of the observed TE on the observed CR instead of the hierarchical model underestimated standard errors and overestimated the number of significant results by a factor of two. The significant correlation between the CR and the TE suggests that, rather than merely pooling the TE into a single summary estimate, investigators should search for the causes of heterogeneity related to patient characteristics and treatment protocols to determine when treatment is most beneficial and that they should plan to study this heterogeneity in clinical trials.

15.
Calculations of the power of statistical tests are important in planning research studies (including meta-analyses) and in interpreting situations in which a result has not proven to be statistically significant. The authors describe procedures to compute statistical power of fixed- and random-effects tests of the mean effect size, tests for heterogeneity (or variation) of effect size parameters across studies, and tests for contrasts among effect sizes of different studies. Examples are given using 2 published meta-analyses. The examples illustrate that statistical power is not always high in meta-analysis.
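As a worked example of the kind of calculation the authors describe, the sketch below gives the usual normal-approximation power of the two-sided fixed-effect z-test that the mean effect size is zero; the assumed true effect and the per-study variances are hypothetical, and (as noted in the comment) a random-effects version would add an assumed between-study variance to each study's variance.

```python
import numpy as np
from scipy.stats import norm

def power_fixed_effect_mean(mu, vi, alpha=0.05):
    """Power of the two-sided fixed-effect z-test that the mean effect size is zero.

    mu : assumed true common effect size
    vi : within-study sampling variances of the effect sizes
    (Normal-approximation sketch; a random-effects version would add an assumed
    between-study variance tau^2 to each element of vi.)
    """
    se = np.sqrt(1.0 / np.sum(1.0 / np.asarray(vi, float)))
    lam = mu / se                        # noncentrality of the pooled z statistic
    crit = norm.ppf(1 - alpha / 2)
    return norm.sf(crit - lam) + norm.cdf(-crit - lam)

# Ten hypothetical studies, each with effect-size variance 0.04, true effect 0.2
print(power_fixed_effect_mean(0.2, [0.04] * 10))
```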

16.
BACKGROUND: Few meta-analyses of randomised trials assess the quality of the studies included. Yet there is increasing evidence that trial quality can affect estimates of intervention efficacy. We investigated whether different methods of quality assessment provide different estimates of intervention efficacy evaluated in randomised controlled trials (RCTs). METHODS: We randomly selected 11 meta-analyses that involved 127 RCTs on the efficacy of interventions used for circulatory and digestive diseases, mental health, and pregnancy and childbirth. We replicated all the meta-analyses using published data from the primary studies. The quality of reporting of all 127 clinical trials was assessed by means of component and scale approaches. To explore the effects of quality on the quantitative results, we examined the effects of different methods of incorporating quality scores (sensitivity analysis and quality weights) on the results of the meta-analyses. FINDINGS: The quality of trials was low. Masked assessments provided significantly higher scores than unmasked assessments (mean 2.74 [SD 1.10] vs 2.55 [1.20]). Low-quality trials (score ≤ 2), compared with high-quality trials (score > 2), were associated with an increased estimate of benefit of 34% (ratio of odds ratios [ROR] 0.66 [95% CI 0.52-0.83]). Trials that used inadequate allocation concealment, compared with those that used adequate methods, were also associated with an increased estimate of benefit (37%; ROR 0.63 [0.45-0.88]). The average treatment benefit was 39% (odds ratio [OR] 0.61 [0.57-0.65]) for all trials, 52% (OR 0.48 [0.43-0.54]) for low-quality trials, and 29% (OR 0.71 [0.65-0.77]) for high-quality trials. Use of all the trial scores as quality weights reduced the effects to 35% (OR 0.65 [0.59-0.71]) and resulted in the least statistical heterogeneity. INTERPRETATION: Studies of low methodological quality in which the estimate of quality is incorporated into the meta-analyses can alter the interpretation of the benefit of intervention, whether a scale or component approach is used in the assessment of trial quality.

17.
D. A. Berry, Statistics in Medicine, 1993, 12(15-16): 1377-93; discussion 1395-404
This paper describes a Bayesian approach to the design and analysis of clinical trials, and compares it with the frequentist approach. Both approaches address learning under uncertainty. But they are different in a variety of ways. The Bayesian approach is more flexible. For example, accumulating data from a clinical trial can be used to update Bayesian measures, independent of the design of the trial. Frequentist measures are tied to the design, and interim analyses must be planned for frequentist measures to have meaning. Its flexibility makes the Bayesian approach ideal for analysing data from clinical trials. In carrying out a Bayesian analysis for inferring treatment effect, information from the clinical trial and other sources can be combined and used explicitly in drawing conclusions. Bayesians and frequentists address making decisions very differently. For example, when choosing or modifying the design of a clinical trial, Bayesians use all available information, including that which comes from the trial itself. The ability to calculate predictive probabilities for future observations is a distinct advantage of the Bayesian approach to designing clinical trials and other decisions. An important difference between Bayesian and frequentist thinking is the role of randomization.

18.
When several clinical trials report multiple outcomes, meta-analyses ordinarily analyse each outcome separately. Instead, by applying generalized-least-squares (GLS) regression, Raudenbush et al. showed how to analyse the multiple outcomes jointly in a single model. A variant of their GLS approach, discussed here, can incorporate correlations among the outcomes within treatment groups and thus provide more accurate estimates. Also, it facilitates adjustment for covariates. In our approach, each study need not report all outcomes nor evaluate all treatments. For example, a meta-analysis may evaluate two or more treatments (one 'treatment' may be a control) and include all randomized controlled trials that report on any subset (of one or more) of the treatments of interest. The analysis omits other treatments that these trials evaluated but that are not of interest to the meta-analyst. In the proposed fixed-effects GLS regression model, study-level and treatment-arm-level covariates may be predictors of one or more of the outcomes. An analysis of rheumatoid arthritis data from trials of second-line drug treatments (used after initial standard therapies prove unsatisfactory for a patient) motivates and applies the method. Data from 44 randomized controlled trials were used to evaluate the effectiveness of injectable gold and auranofin on the three outcomes tender joint count, grip strength, and erythrocyte sedimentation rate. The covariates in the regression model were quality and duration of trial and baseline measures of the patients' disease severity and disease activity in each trial. The meta-analysis found that gold was significantly more effective than auranofin on all three treatment outcomes. For all estimated coefficients, the multiple-outcomes model produced moderate changes in their values and slightly smaller standard errors, compared with the three separate-outcome models.
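The estimator at the core of this GLS approach is the standard generalized-least-squares formula; the sketch below is a generic illustration (not the authors' model or the rheumatoid arthritis data) in which two hypothetical trials each report the same two correlated outcomes, stacked into one vector with a block-diagonal covariance matrix.

```python
import numpy as np

def gls(y, X, V):
    """Generalized least squares: beta_hat = (X' V^-1 X)^-1 X' V^-1 y.

    y : stacked outcome estimates from all trials (several outcomes per trial)
    X : design matrix (outcome/treatment indicators and covariates)
    V : block-diagonal covariance matrix; the within-trial blocks carry the
        correlations among the multiple outcomes of the same trial
    """
    Vinv = np.linalg.inv(V)
    cov = np.linalg.inv(X.T @ Vinv @ X)
    beta = cov @ X.T @ Vinv @ y
    se = np.sqrt(np.diag(cov))
    return beta, se

# Two hypothetical trials, each reporting the same two correlated outcomes
y = np.array([0.30, 0.20, 0.25, 0.15])
X = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])        # one mean per outcome
block = np.array([[0.04, 0.01], [0.01, 0.05]])        # within-trial covariance
V = np.block([[block, np.zeros((2, 2))], [np.zeros((2, 2)), block]])
print(gls(y, X, V))
```

Study-level or treatment-arm-level covariates would enter simply as extra columns of the design matrix X.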

19.
Homeopathic remedies are advocated for the treatment of postoperative ileus, yet data from clinical trials are inconclusive. We therefore performed meta-analyses of existing clinical trials to determine whether homeopathic treatment has any greater effect than placebo administration on the restoration of intestinal peristalsis in patients after abdominal or gynecologic surgery. We conducted systematic literature searches to identify relevant clinical trials. Meta-analyses were conducted using RevMan software. Separate meta-analyses were conducted for any homeopathic treatment versus placebo; homeopathic remedies of < 12C potency versus placebo; homeopathic remedies of ≥ 12C potency versus placebo. A "sensitivity analysis" was performed to test the effect of excluding studies of low methodologic quality. Our endpoint was time to first flatus. Meta-analyses indicated a statistically significant (p < 0.05) weighted mean difference (WMD) in favor of homeopathy (compared with placebo) on the time to first flatus. Meta-analyses of the three studies that compared homeopathic remedies ≥ 12C versus placebo showed no significant difference (p > 0.05). Meta-analyses of studies comparing homeopathic remedies < 12C with placebo indicated a statistically significant (p < 0.05) WMD in favor of homeopathy on the time to first flatus. Excluding methodologically weak trials did not substantially change any of the results. There is evidence that homeopathic treatment can reduce the duration of ileus after abdominal or gynecologic surgery. However, several caveats preclude a definitive judgment. These results should form the basis of a randomized controlled trial to resolve the issue.

20.
Reports an error in the original article by J. W. Ray and W. R. Shadish (Journal of Consulting and Clinical Psychology, 1996[Dec], Vol 64(6), 1316–1325). On page 1325, a correction is made to column 1, lines 25–26. (The following abstract of this article originally appeared in record 1996-07086-021). The computation of effect sizes is a key feature of meta-analysis. In treatment outcome meta-analyses, the standardized mean difference statistic on posttest scores (d) is usually the effect size statistic used. However, when primary studies do not report the statistics needed to compute d, many methods for estimating d from other data have been developed. Little is known about the accuracy of these estimates, yet meta-analysts frequently use them on the assumption that they are estimating the same population parameter as d. This study investigates that assumption empirically. On a sample of 140 psychosocial treatment or prevention studies from a variety of areas, the present study shows that these estimates yield results that are often not equivalent to d in either mean or variance. The frequent mixing of d and other estimates of d in past meta-analyses, therefore, may have led to biased effect size estimates and inaccurate significance tests.
