Similar Documents
15 similar documents found (search time: 0 ms)
1.
    
In their criticism of B. E. Wampold and R. C. Serlin's (see record 2000-16737-003) analysis of treatment effects in nested designs, M. Siemer and J. Joormann (see record 2003-10163-009) argued that providers of services should be considered a fixed factor because typically providers are neither randomly selected from a population of providers nor randomly assigned to treatments, and statistical power to detect treatment effects is greater in the fixed than in the mixed model. The authors of the present article argue that if providers are considered fixed, conclusions about the treatment must be conditioned on the specific providers in the study, and they show that in this case generalizing beyond these providers incurs inflated Type I error rates. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
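A minimal Monte Carlo sketch of that argument (not the authors' own simulation; the provider counts, variance components, and alpha below are arbitrary): when provider differences are real but providers are analyzed as fixed, the treatment F test is formed against the client-level error term, and its Type I error rate climbs well above the nominal level, whereas testing against the provider-within-treatment mean square holds the rate near alpha.

```python
# Sketch: clients nested in providers, providers nested in two treatments,
# true treatment effect = 0, but providers differ. Fixed-provider analysis
# tests Treatment against MS(clients within providers); random-provider
# (mixed) analysis tests it against MS(providers within treatments).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_reps, k, n = 2000, 4, 10              # k providers per treatment, n clients each
sigma_provider, sigma_error, alpha = 0.5, 1.0, 0.05
reject_fixed = reject_random = 0

for _ in range(n_reps):
    # provider means drawn from a population of providers; no treatment effect
    y = (rng.normal(0, sigma_provider, size=(2, k, 1))
         + rng.normal(0, sigma_error, size=(2, k, n)))
    grand = y.mean()
    trt_means, prov_means = y.mean(axis=(1, 2)), y.mean(axis=2)
    ss_trt = k * n * np.sum((trt_means - grand) ** 2)
    ss_prov = n * np.sum((prov_means - trt_means[:, None]) ** 2)
    ss_err = np.sum((y - prov_means[..., None]) ** 2)
    ms_trt = ss_trt / 1                          # df = 1
    ms_prov = ss_prov / (2 * (k - 1))            # df = 2(k - 1)
    ms_err = ss_err / (2 * k * (n - 1))          # df = 2k(n - 1)
    if ms_trt / ms_err > stats.f.ppf(1 - alpha, 1, 2 * k * (n - 1)):
        reject_fixed += 1                        # providers treated as fixed
    if ms_trt / ms_prov > stats.f.ppf(1 - alpha, 1, 2 * (k - 1)):
        reject_random += 1                       # providers treated as random

print(f"Type I error, providers fixed : {reject_fixed / n_reps:.3f}")
print(f"Type I error, providers random: {reject_random / n_reps:.3f}")
```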

2.
Ignoring a nested factor can influence the validity of statistical decisions about treatment effectiveness. Previous discussions have centered on consequences of ignoring nested factors versus treating them as random factors on Type I errors and measures of effect size (B. E. Wampold & R. C. Serlin, see record 2000-16737-003). The authors (a) discuss circumstances under which the treatment of nested provider effects as fixed as opposed to random is appropriate; (b) present 2 formulas for the correct estimation of effect sizes when nested factors are fixed; (c) present the results of Monte Carlo simulations of the consequences of treating providers as fixed versus random on effect size estimates, Type I error rates, and power; and (d) discuss implications of mistaken considerations of provider effects for the study of differential treatment effects in psychotherapy research. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
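The two formulas referenced in the abstract are not reproduced here; the hypothetical sketch below only illustrates the general point that a standardized effect size depends on whether provider variance is partialled out of the denominator (providers fixed) or folded into it.

```python
# Illustrative only (not the authors' two formulas): a standardized mean
# difference can be scaled by the within-provider SD (provider differences
# partialled out) or by the total within-treatment SD (provider differences
# treated as part of error). All values below are made up.
import numpy as np

rng = np.random.default_rng(7)
k, n, provider_sd, error_sd, true_diff = 4, 20, 0.6, 1.0, 0.5
provider_effects = rng.normal(0, provider_sd, size=(2, k))
y = (np.array([0.0, true_diff])[:, None, None]
     + provider_effects[..., None]
     + rng.normal(0, error_sd, size=(2, k, n)))

mean_diff = y[1].mean() - y[0].mean()
sd_within_provider = np.sqrt(np.mean(y.var(axis=2, ddof=1)))                   # providers partialled out
sd_within_treatment = np.sqrt(np.mean(y.reshape(2, -1).var(axis=1, ddof=1)))   # provider variance included

print(f"d against within-provider SD : {mean_diff / sd_within_provider:.2f}")
print(f"d against within-treatment SD: {mean_diff / sd_within_treatment:.2f}")
```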

3.
The authors disagree with M. Siemer and J. Joormann's (see record 2003-10163-009) assertion that therapist should be a fixed effect in psychotherapy treatment outcome studies. If treatment is properly standardized, therapist effects can be examined in preliminary tests and the therapist term deleted from analyses if such differences approach zero. If therapist effects are anticipated and either cannot be minimized through standardization or are specifically of interest because of the nature of the research question, the study has to be planned with adequate statistical power for including therapist as a random term. Simulation studies conducted by Siemer and Joormann confounded bias due to small sample size and inconsistent estimates. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Underpowered studies persist in the psychological literature. This article examines reasons for their persistence and the effects on efforts to create a cumulative science. The "curse of multiplicities" plays a central role in the presentation. Most psychologists realize that testing multiple hypotheses in a single study affects the Type I error rate, but corresponding implications for power have largely been ignored. The presence of multiple hypothesis tests leads to 3 different conceptualizations of power. Implications of these 3 conceptualizations are discussed from the perspective of the individual researcher and from the perspective of developing a coherent literature. Supplementing significance tests with effect size measures and confidence intervals is shown to address some but not necessarily all problems associated with multiple testing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
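A hedged numerical illustration of why multiple tests create distinct notions of power (assuming independent tests, a simplification the article does not require): per-test power of .80 does not mean an 80% chance that every hypothesis in the study reaches significance.

```python
# With m independent tests each run at 80% power, the probability that *all*
# of them are significant falls well below .80, while the probability that at
# least one is significant rises above it.
per_test_power = 0.80
for m in (1, 2, 3, 5):
    all_sig = per_test_power ** m
    any_sig = 1 - (1 - per_test_power) ** m
    print(f"m={m}: P(all significant)={all_sig:.2f}, P(at least one significant)={any_sig:.2f}")
```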

5.
Various statistical methods, developed after 1970, offer the opportunity to substantially improve upon the power and accuracy of the conventional t test and analysis of variance methods for a wide range of commonly occurring situations. The authors briefly review some of the more fundamental problems with conventional methods based on means; provide some indication of why recent advances, based on robust measures of location (or central tendency), have practical value; and describe why modern investigations dealing with nonnormality find practical problems when comparing means, in contrast to earlier studies. Some suggestions are made about how to proceed when using modern methods. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
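One concrete example of the robust alternatives this literature discusses, offered as a sketch rather than as the article's recommendation: comparing 20%-trimmed means (Yuen's method) instead of ordinary means. The data below are artificial, and the `trim` argument of `scipy.stats.ttest_ind` requires SciPy 1.7 or later.

```python
# Heavy-tailed samples: mostly N(mu, 1) with occasional large outliers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = np.where(rng.random(40) < 0.9, rng.normal(0.0, 1, 40), rng.normal(0.0, 10, 40))
b = np.where(rng.random(40) < 0.9, rng.normal(0.6, 1, 40), rng.normal(0.6, 10, 40))

t_classic = stats.ttest_ind(a, b)              # ordinary t test on means
t_trimmed = stats.ttest_ind(a, b, trim=0.2)    # Yuen's test on 20%-trimmed means
print(f"means        : t={t_classic.statistic:.2f}, p={t_classic.pvalue:.3f}")
print(f"trimmed means: t={t_trimmed.statistic:.2f}, p={t_trimmed.pvalue:.3f}")
```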

6.
When treatments are administered in groups, clients interact in ways that lead to violations of a key assumption of most statistical analyses: the independence of observations. The resulting dependencies, when not properly accounted for, can increase Type I errors dramatically. Of the 33 studies of group-administered treatment on the empirically supported treatments list, none appropriately analyzed their data. The current authors provide corrections that can be applied to improper analyses. After the corrections, only 12.4% to 68.2% of tests that were originally reported as significant remained significant, depending on what assumptions were made about how large the dependencies among observations really are. Under the same range of assumptions, 6 to 19 of the 33 studies no longer had any significant results after correction. The authors end by providing recommendations for researchers planning group-administered treatment research. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
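A sketch of the kind of adjustment at issue (not the authors' exact corrections, which also adjust degrees of freedom): with clients treated in groups of size m and intraclass correlation rho, the variance of a group mean is inflated by the design effect 1 + (m - 1)rho, so a t statistic computed under the independence assumption can be deflated by the square root of that factor. The group size and ICC values below are hypothetical.

```python
# Approximate cluster correction via the design effect (variance inflation factor).
import math

def corrected_t(naive_t, group_size, icc):
    """Deflate a t statistic that was computed as if observations were independent."""
    design_effect = 1 + (group_size - 1) * icc
    return naive_t / math.sqrt(design_effect)

naive_t = 2.30          # nominally significant at alpha = .05 with large df
for icc in (0.00, 0.05, 0.15, 0.30):
    print(f"ICC = {icc:.2f}: corrected t = {corrected_t(naive_t, group_size=8, icc=icc):.2f}")
```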

7.
In meta-analysis, the usual way of assessing whether a set of single studies is homogeneous is by means of the Q test. However, the Q test informs meta-analysts only about the presence versus the absence of heterogeneity; it does not report on the extent of such heterogeneity. Recently, the I² index has been proposed to quantify the degree of heterogeneity in a meta-analysis. In this article, the performances of the Q test and the confidence interval around the I² index are compared by means of a Monte Carlo simulation. The results show the utility of the I² index as a complement to the Q test, although it has the same problems of power with a small number of studies. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
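For reference, a minimal computation of Q and the I² point estimate from study-level effect sizes and sampling variances (the values are hypothetical and the article's simulation conditions are not reproduced):

```python
import numpy as np
from scipy import stats

effects = np.array([0.30, 0.55, 0.10, 0.42, 0.25])    # hypothetical study effect sizes
variances = np.array([0.04, 0.06, 0.03, 0.05, 0.04])  # their sampling variances
w = 1 / variances
mean_effect = np.sum(w * effects) / np.sum(w)          # fixed-effects weighted mean
Q = np.sum(w * (effects - mean_effect) ** 2)           # homogeneity statistic
df = len(effects) - 1
p_Q = stats.chi2.sf(Q, df)                             # Q test of homogeneity
I2 = max(0.0, (Q - df) / Q) * 100                      # % of variation beyond chance
print(f"Q = {Q:.2f} (df = {df}, p = {p_Q:.3f}), I² = {I2:.1f}%")
```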

8.
This paper details the challenges encountered by authors summarizing evidence from a primary study to describe a treatment's effectiveness using an effect size (ES) estimate. Dilemmas that are encountered, including how to calculate and interpret the pertinent standardized mean difference ES for results from studies of various research designs, are described. Recommendations are offered to authors of primary studies and to those conducting summaries of primary studies. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
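One of the simpler cases such summaries face, sketched in code: the standardized mean difference for a two-group posttest-only design, with Hedges' small-sample bias correction. Other designs (pretest-posttest, repeated measures) require different standardizers, which is part of the dilemma the article describes; the data below are made up.

```python
import numpy as np

def hedges_g(x, y):
    """Standardized mean difference with Hedges' small-sample correction."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))                # pooled SD
    d = (np.mean(x) - np.mean(y)) / sp           # Cohen's d
    j = 1 - 3 / (4 * (nx + ny) - 9)              # bias-correction factor
    return d * j

treated = np.array([5.1, 6.0, 4.8, 5.9, 6.3, 5.5])
control = np.array([4.2, 4.9, 5.0, 4.4, 4.6, 5.1])
print(f"Hedges' g = {hedges_g(treated, control):.2f}")
```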

9.
This study examined various factors that affect statistical power in randomized intervention studies with noncompliance. On the basis of Monte Carlo simulations, this study demonstrates how statistical power changes depending on compliance rate, study design, outcome distributions, and covariate information. It also examines how these factors influence power in different methods of estimating intervention effects. Intent-to-treat analysis and complier average causal effect estimation are compared as 2 alternative ways of estimating intervention effects under noncompliance. The results of this investigation provide practical implications in designing and evaluating intervention studies taking into account noncompliance. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
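A hedged sketch of the two estimators being compared, under one-sided noncompliance and the usual instrumental-variable assumptions (randomized assignment, exclusion restriction): the complier average causal effect equals the intent-to-treat effect divided by the compliance rate. Sample size, compliance rate, and effect size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 2000
complier = rng.random(n) < 0.6          # 60% would take the treatment if offered
assigned = rng.random(n) < 0.5          # randomized assignment
treated = assigned & complier           # one-sided noncompliance: no one in control is treated
effect = 0.4                            # effect only for those actually treated
y = rng.normal(0, 1, n) + effect * treated

itt = y[assigned].mean() - y[~assigned].mean()   # intent-to-treat effect
compliance_rate = treated[assigned].mean()
cace = itt / compliance_rate                     # complier average causal effect
print(f"ITT = {itt:.2f}, compliance = {compliance_rate:.2f}, CACE = {cace:.2f}")
```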

10.
One of the most frequently cited reasons for conducting a meta-analysis is the increase in statistical power that it affords a reviewer. This article demonstrates that fixed-effects meta-analysis increases statistical power by reducing the standard error of the weighted average effect size (T̄.) and, in so doing, shrinks the confidence interval around T̄.. Small confidence intervals make it more likely for reviewers to detect nonzero population effects, thereby increasing statistical power. Smaller confidence intervals also represent increased precision of the estimated population effect size. Computational examples are provided for 3 effect-size indices: d (standardized mean difference), Pearson's r, and odds ratios. Random-effects meta-analyses also may show increased statistical power and a smaller standard error of the weighted average effect size. However, the authors demonstrate that increasing the number of studies in a random-effects meta-analysis does not always increase statistical power. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
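The fixed-effects mechanics described here, reduced to a few lines with hypothetical d values: the weighted average effect size, its standard error 1/sqrt(sum of weights), and the confidence interval that shrinks as the pooled weight grows.

```python
import numpy as np
from scipy import stats

d = np.array([0.20, 0.35, 0.50, 0.15, 0.40])   # hypothetical study effect sizes (d)
v = np.array([0.05, 0.04, 0.06, 0.03, 0.05])   # their sampling variances
w = 1 / v                                      # fixed-effects (inverse-variance) weights
t_bar = np.sum(w * d) / np.sum(w)              # weighted average effect size
se = 1 / np.sqrt(np.sum(w))                    # smaller than any single-study SE
z = stats.norm.ppf(0.975)
print(f"weighted mean = {t_bar:.3f}, SE = {se:.3f}, "
      f"95% CI = [{t_bar - z * se:.3f}, {t_bar + z * se:.3f}]")
```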

11.
The fixed-effects (FE) meta-analytic confidence intervals for unstandardized and standardized mean differences are based on an unrealistic assumption of effect-size homogeneity and perform poorly when this assumption is violated. The random-effects (RE) meta-analytic confidence intervals are based on an unrealistic assumption that the selected studies represent a random sample from a large superpopulation of studies. The RE approach cannot be justified in typical meta-analysis applications in which studies are nonrandomly selected. New FE meta-analytic confidence intervals for unstandardized and standardized mean differences are proposed that are easy to compute and perform properly under effect-size heterogeneity and nonrandomly selected studies. The proposed meta-analytic confidence intervals may be used to combine unstandardized or standardized mean differences from studies having either independent samples or dependent samples and may also be used to integrate results from previous studies into a new study. An alternative approach to assessing effect-size heterogeneity is presented. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
This article presents confidence interval methods for improving on the standard F tests in the balanced, completely between-subjects, fixed-effects analysis of variance. Exact confidence intervals for omnibus effect size measures, such as ω² and the root-mean-square standardized effect, provide all the information in the traditional hypothesis test and more. They allow one to test simultaneously whether overall effects are (a) zero (the traditional test), (b) trivial (do not exceed some small value), or (c) nontrivial (definitely exceed some minimal level). For situations in which single-degree-of-freedom contrasts are of primary interest, exact confidence interval methods for contrast effect size measures such as the contrast correlation are also provided. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
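A sketch of the noncentral-F inversion that underlies exact intervals of this kind (the general technique, not necessarily this article's notation or procedures): find the noncentrality parameters whose F distributions place the observed F at the 97.5th and 2.5th percentiles, then convert those bounds to a proportion-of-variance metric. The F value and degrees of freedom below are hypothetical.

```python
from scipy import stats, optimize

F_obs, df1, df2 = 4.20, 3, 76                    # hypothetical omnibus F test

def ncp_bound(target_prob):
    # solve ncf.cdf(F_obs; df1, df2, nc) = target_prob for the noncentrality nc
    f = lambda nc: stats.ncf.cdf(F_obs, df1, df2, nc) - target_prob
    if f(0) < 0:                                 # even nc = 0 puts F_obs below the target quantile
        return 0.0
    return optimize.brentq(f, 0, 1000)

lo, hi = ncp_bound(0.975), ncp_bound(0.025)      # lower and upper noncentrality limits
# one common conversion from noncentrality to a proportion-of-variance metric
eta2 = lambda nc: nc / (nc + df1 + df2 + 1)
print(f"95% CI for the proportion of variance: [{eta2(lo):.3f}, {eta2(hi):.3f}]")
```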

13.
Reports an error in the article, "Two New Procedures for Studying Validity Generalization," by Nambury S. Raju and Michael J. Burke (Journal of Applied Psychology, Vol. 68, No. 3, pp. 382-395). The equation in Step 4 for TSA 2 in Table 1 on page 385 was incorrectly stated; the correct formula is provided. (The following abstract of this article originally appeared in record 1983-31751-001.) Several Monte Carlo studies examined the accuracy of 2 new procedures in estimating population true validity mean and variance. Results indicate that 1 of the new procedures provided slightly more accurate estimates than the procedures of F. L. Schmidt and J. E. Hunter (see record 1978-11448-001) and J. C. Callender and H. G. Osburn (see record 1981-00257-001). From a practical point of view, however, the estimates from the various procedures were quite comparable. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Reviews Kline's book (see record 2004-13019-000), which examines the controversy regarding significance testing, offers methods for effect size and confidence interval estimation, and suggests some alternative methodologies. Whether or not one accepts Kline's view of the future of statistical significance testing, there is much of value in this book. As a textbook, it could serve as a reference for an upper-level undergraduate course, but it would be more appropriate for a graduate course. The book is a thought-provoking examination of the uneasy alliance between null hypothesis significance testing, and effect size and confidence interval estimation. There is much in this book for those on both sides of the null hypothesis testing debate and for those unsure where they stand. Whatever the future holds, Kline has done well in illustrating recent advances in statistical decision making. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
In the articles, "The Statistic With the Smaller Critical Value," by H. J. Keselman (Psychological Bulletin, 1974, Vol. 81, No. 2, pp. 130-131) in record 1975-22206-001, and "Tukey Tests for Pair-wise Contrasts Following the Analysis of Variance: Is There a Type IV Error?" by H. J. Keselman and Robert Murray (Psychological Bulletin, 1974, Vol. 81, No. 9, pp. 608-609) in record 1975-02174-001, there is an error regarding Jacob Cohen's (Statistical Power Analysis for the Behavioral Sciences; New York: Academic Press, 1969) effect size index, f. The f values and their respective proportions of variance in the two articles are larger than the values that Cohen has operationally defined as small, medium, and large. However, it is important to note that this misrepresentation neither invalidates nor limits the usefulness of the multiple-comparison results. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
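The conversion at the heart of the misstatement is standard: Cohen's f and the proportion of variance η² are related by η² = f²/(1 + f²), so Cohen's conventional small, medium, and large values of f (.10, .25, .40) correspond to roughly 1%, 6%, and 14% of variance. A two-line check:

```python
# Cohen's f versus proportion of variance: eta² = f² / (1 + f²).
for label, f in (("small", 0.10), ("medium", 0.25), ("large", 0.40)):
    eta2 = f ** 2 / (1 + f ** 2)
    print(f"{label}: f = {f:.2f}, proportion of variance = {eta2:.3f}")
```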
