Similar Articles (20 results)
1.
Philip Kendall's (1997) editorial encouraged authors in the Journal of Consulting and Clinical Psychology (JCCP) to report effect sizes and clinical significance. The present authors assessed the influence of that editorial, and of other American Psychological Association initiatives to improve statistical practices, by examining 239 JCCP articles published from 1993 to 2001. For analysis of variance, reporting of means and standardized effect sizes increased over that period, but the rate of effect size reporting for the other types of analyses surveyed remained low. Confidence interval reporting increased little, reaching 17% in 2001. By 2001, the percentage of articles considering clinical (not only statistical) significance was 40%, compared with 36% in 1996. In a follow-up survey of JCCP authors (N = 62), many expressed positive attitudes toward statistical reform. Substantially improving statistical practices may require stricter editorial policies and further guidance for authors on reporting and interpreting these measures. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
The reporting and interpretation of effect sizes in addition to statistical significance tests is becoming increasingly recognized as good research practice, as evidenced by the editorial policies of at least 23 journals that now require effect sizes. Statistical significance tests are limited in the information they provide readers about results, and effect sizes can be useful when evaluating result importance. The current article (a) summarizes statistical versus practical significance, (b) briefly discusses various effect size options, (c) presents a review of research articles published in the International Journal of Play Therapy (1993-2003) regarding use of effect sizes and statistical significance tests, and (d) provides recommendations for improved research practice in the journal and elsewhere. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
The proportion of studies that use one-tailed statistical significance tests (π) in a population of studies targeted by a meta-analysis can affect the bias of the sample effect sizes (sample ESs, or ds) that are accessible to the meta-analyst. H. C. Kraemer, C. Gardner, J. O. Brooks, and J. A. Yesavage (1998) found that, assuming π = 1.0, for small studies (small Ns) the overestimation bias was large for small population ESs (δ = 0.2) and reached a maximum for the smallest population ES (viz., δ = 0). The present article shows (with a minor modification of H. C. Kraemer et al.'s model) that when π = 0, the small-N bias of accessible sample ESs is relatively small for δ ≤ 0.2 and a minimum (in fact, nonexistent) for δ = 0. Implications are discussed for interpretations of meta-analyses of (a) therapy efficacy and therapy effectiveness studies, (b) comparative outcome studies, and (c) studies targeting small but important population ESs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
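To make the π = 0 result concrete, here is a minimal simulation sketch in Python (with hypothetical study parameters; it is not Kraemer et al.'s analytic model). When only significant studies are accessible and every test is two-tailed, significant sample ESs at δ = 0 land symmetrically on both sides of zero, so their mean is essentially unbiased; one-tailed selection keeps only the positive tail.

```python
# Illustrative simulation, not Kraemer et al.'s (1998) model: compare the
# mean accessible (significant-only) sample d under one-tailed (pi = 1)
# versus two-tailed (pi = 0) selection when the population ES is delta = 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def accessible_mean_d(delta, n_per_group, one_tailed, n_studies=20000, alpha=0.05):
    """Mean sample d among simulated studies that reach significance."""
    kept = []
    for _ in range(n_studies):
        a = rng.normal(delta, 1.0, n_per_group)  # "treatment" group
        b = rng.normal(0.0, 1.0, n_per_group)    # "control" group
        t, p = stats.ttest_ind(a, b)
        if one_tailed:
            p = p / 2 if t > 0 else 1 - p / 2    # H1: treatment > control
        if p < alpha:
            pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            kept.append((a.mean() - b.mean()) / pooled_sd)
    return np.mean(kept)

print("pi = 1:", round(accessible_mean_d(0.0, 10, one_tailed=True), 2))
print("pi = 0:", round(accessible_mean_d(0.0, 10, one_tailed=False), 2))
```

With 10 participants per group, the one-tailed run typically returns a mean accessible d near 1, while the two-tailed run hovers near 0.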

4.
Null hypothesis significance testing has dominated quantitative research in education and psychology. However, the statistical significance of a test, as indicated by a p-value, does not speak to the practical significance of the study. Reporting effect size to supplement the p-value is therefore strongly recommended by scholars, journal editors, and academic associations. As a measure of practical significance, effect size quantifies the size of mean differences or the strength of associations and directly answers the research questions. Furthermore, comparing effect sizes across studies facilitates meta-analytic assessment of the effect size and the accumulation of knowledge. In the current comprehensive review, we investigated the most recent effect size reporting and interpreting practices in 1,243 articles published in 14 academic journals from 2005 to 2007. Overall, 49% of the articles reported effect size, and 57% of those also interpreted it. To model good research methodology in education and psychology, we also provide an illustrative example of reporting and interpreting effect size from a published study, along with a 7-step guideline for quantitative researchers and some recommended resources on how to understand and interpret effect size. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
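For readers new to the practice, a minimal Python sketch of reporting an effect size alongside the p-value might look like the following (the data are simulated stand-ins, not values from any study reviewed here).

```python
# A minimal sketch of reporting effect size alongside a p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(105, 15, 50)  # hypothetical post-test scores
control = rng.normal(100, 15, 50)

t, p = stats.ttest_ind(treatment, control)

# Cohen's d: mean difference scaled by the pooled standard deviation.
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treatment.mean() - control.mean()) / pooled_sd

# Report both: p speaks to statistical significance, d to magnitude.
print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```

Reporting both numbers lets a reader judge magnitude (d) separately from sample-size-driven significance (p).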

5.
Discusses 3 issues in applying and interpreting effect-size (ES) estimation in psychological research. First, criteria for choosing an appropriate ES metric and the advisability of adopting a single ES (PV, or the percentage of shared variance) for all research are examined. A common metric would (a) forfeit information about study design and (b) permit imprecise problem definitions. Further, the different design-related metrics suggested by J. Cohen (1977) neither enhance nor trivialize the relations they describe. Ways are suggested to substantively evaluate magnitudes of effect within specific topic areas. The interpretive yardsticks include (a) multiple choices of contrasting effect sizes, (b) practical significance, and (c) research methodology. The most informative ES interpretation occurs when an ES is compared with other ESs involving the same or similar variables. Finally, the value of reporting significance tests along with effect sizes is scrutinized; the two statistics are not completely redundant, since an ES alone cannot indicate whether an effect size of zero has been effectively ruled out. (19 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
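The relation between d-type and PV-type metrics is easy to illustrate. The toy sketch below uses the standard conversion for two equal-sized groups (r = d / √(d² + 4), PV = r²); it illustrates the metrics' arithmetic, not the article's argument itself.

```python
# Toy conversion between Cohen's d and PV (percentage of shared variance).
# Assumes two groups of equal size; r is the point-biserial correlation.
import numpy as np

def d_to_pv(d):
    """Cohen's d -> point-biserial r -> PV (r squared)."""
    r = d / np.sqrt(d**2 + 4)
    return r**2

def pv_to_d(pv):
    """Inverse conversion: PV -> r -> Cohen's d."""
    r = np.sqrt(pv)
    return 2 * r / np.sqrt(1 - r**2)

for d in (0.2, 0.5, 0.8):  # Cohen's small / medium / large benchmarks
    print(f"d = {d:.1f}  ->  PV = {100 * d_to_pv(d):.1f}% of variance")
```

Note that Cohen's "large" benchmark of d = 0.8 corresponds to under 14% of shared variance, one reason a universal PV metric can make substantial effects look trivial.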

6.
Editorial.     
Along with the new teal cover, the current issue of the Journal of Consulting and Clinical Psychology (JCCP) marks the transition of the journal to a new editorial team. Although the discipline of clinical psychology has a diversity of fine journals, JCCP has long been regarded as a premier journal for publishing high-quality, empirical work in clinical psychology. The intention of the new editorial team is to continue the long-established tradition of excellence for JCCP and to ensure its ongoing influence and responsiveness to important innovations and new directions in contemporary clinical psychology. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
8.
Editorial.     
Once again, a change has occurred. As you probably noticed, the color of the cover for this issue of the Journal of Consulting and Clinical Psychology (JCCP) is different, signaling the "changing of the guard" of the editorial team. JCCP has a long-standing legacy of excellence in publishing high-quality, cutting-edge, and innovative research and scholarship in clinical psychology. Its increasing Institute for Scientific Information (ISI) impact factor rating continues to attest to its influence on the field overall. As the new editor, I intend to do my utmost to preserve this reputation. Moreover, I am humbled upon reflecting on those before me in this position, as well as excited about shepherding such a premier journal over the course of the next 6 years. I am cognizant not only of the impact that this journal has on the science of clinical psychology but also of its influence on clinical practice and service delivery via the dissemination and adoption of evidence-based interventions. In this context, I am grateful for having been able to assemble such a high-caliber team of associate and consulting editors. This editorial presents some ideas for the future of JCCP in terms of content, structure, and format. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Wider use in psychology of confidence intervals (CIs), especially as error bars in figures, is a desirable development. However, psychologists seldom use CIs and may not understand them well. The authors discuss the interpretation of figures with error bars and analyze the relationship between CIs and statistical significance testing. They propose 7 rules of eye to guide the inferential use of figures with error bars. These include general principles: Seek bars that relate directly to effects of interest, be sensitive to experimental design, and interpret the intervals. They also include guidelines for inferential interpretation of the overlap of CIs on independent group means. Wider use of interval estimation in psychology has the potential to improve research communication substantially. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
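As a rough illustration of the kind of inferential reading the rules of eye support, the Python sketch below (simulated data, hypothetical group sizes) computes 95% CIs for two independent group means and compares their overlap with the t-test result.

```python
# Relating CI overlap to significance for two independent group means.
# Simulated placeholder data; not taken from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
g1 = rng.normal(10.0, 3.0, 30)
g2 = rng.normal(12.0, 3.0, 30)

def ci95(x):
    """95% CI for a mean, using the t distribution."""
    m, se = x.mean(), stats.sem(x)
    half = se * stats.t.ppf(0.975, len(x) - 1)
    return m - half, m + half

lo1, hi1 = ci95(g1)
lo2, hi2 = ci95(g2)
t, p = stats.ttest_ind(g1, g2)

overlap = min(hi1, hi2) - max(lo1, lo2)  # > 0 means the intervals overlap
print(f"CI 1: [{lo1:.2f}, {hi1:.2f}]  CI 2: [{lo2:.2f}, {hi2:.2f}]")
print(f"overlap = {overlap:.2f}, p = {p:.4f}")
# One rule of eye: for independent means, moderate overlap (up to about
# half the average margin of error) can still accompany p near .05.
```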

10.
Serious sequelae of youth depression, plus recent concerns over medication safety, prompt growing interest in the effects of youth psychotherapy. In previous meta-analyses, effect sizes (ESs) have averaged .99, well above conventional standards for a large effect and well above the mean ES for other conditions. The authors applied rigorous analytic methods to the largest study sample to date and found a mean ES of .34, which is not superior but significantly inferior to the mean ES for other conditions. Cognitive treatments (e.g., cognitive-behavioral therapy) fared no better than noncognitive approaches. Effects showed both generality (anxiety was also reduced) and specificity (externalizing problems were not), plus short-term but not long-term holding power. Youth depression treatments appear to produce effects that are significant but modest in their strength, breadth, and durability. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
In a meta-analysis, the authors compared the effectiveness of psychological and pharmacological treatments for panic disorder. The percentage of agoraphobic Ss in the sample and the duration of the illness were unrelated to effect size (ES). Type of dependent variable was generally unrelated to treatment outcome, although behavioral measures yielded significantly smaller ESs. Dependent measures of general anxiety, avoidance, and panic attacks yielded larger ESs than did depression measures. Choice of control was related to ES: comparisons with placebo controls yielded larger ESs than comparisons with exposure-only or "other treatment" controls. Psychological coping strategies involving relaxation training, cognitive restructuring, and exposure yielded the most consistent ESs; flooding and combination treatments (psychological and pharmacological) yielded the next most consistent ESs. Antidepressants were the most effective pharmacological intervention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
This meta-analysis addressed the question of how effective grief therapy is and for whom, using B. J. Becker's (1988) techniques for analyzing standardized mean-change scores. Analyses were based on 35 studies (N = 2,284), with a weighted mean effect size (ES) of δ+ = 0.43 (95% confidence interval = 0.33 to 0.52). Clients in no-treatment control groups showed little improvement (d = 0.06), possibly because of the relatively long delay between loss and treatment in most studies (mean delay = 27 months). Moderators of treatment efficacy included time since loss and relationship to the deceased. Client selection procedures, a methodological factor not originally coded in this meta-analysis, appeared to contribute strongly to variability in ESs: a small number of studies involving self-selected clients produced relatively large ESs, whereas the majority of studies involving clients recruited by the investigators produced ESs in the small to moderate range. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
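The pooling step behind a weighted mean ES such as δ+ = 0.43 [0.33, 0.52] typically follows inverse-variance logic. The fixed-effect sketch below uses made-up study ESs and sampling variances and omits the change-score machinery of Becker's (1988) method.

```python
# Fixed-effect (inverse-variance) pooling of study effect sizes.
# The five ESs and variances are hypothetical, for illustration only.
import numpy as np
from scipy import stats

es = np.array([0.55, 0.30, 0.48, 0.20, 0.60])        # study effect sizes
var = np.array([0.020, 0.015, 0.030, 0.010, 0.040])  # their sampling variances

w = 1.0 / var                       # inverse-variance weights
mean_es = np.sum(w * es) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))       # standard error of the weighted mean
z = stats.norm.ppf(0.975)
print(f"weighted mean ES = {mean_es:.2f}, "
      f"95% CI = [{mean_es - z * se:.2f}, {mean_es + z * se:.2f}]")
```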

13.
Confidence intervals (CIs) for means are frequently advocated as alternatives to null hypothesis significance testing (NHST), and a common theme in the debate is that conclusions from CIs and NHST should be mutually consistent. The authors examined a class of CIs for which the conclusions are said to be inconsistent with NHST in within-subjects designs and a class for which the conclusions are said to be consistent. The difference between them is a difference in models. In particular, the main issue is that the class for which the conclusions are said to be consistent derives from fixed-effects models with subjects fixed, not from mixed models with subjects random. The authors offer mixed-model methodology that has been popularized in the statistical literature and in statistical software procedures. Generalizations to different classes of within-subjects designs are explored, and comments on the future direction of the debate on NHST are offered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
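A concrete example of a within-subjects interval built on the subject × condition error term, one of the constructions at issue in this debate, is the Loftus and Masson (1994) CI. The sketch below uses simulated data and is not the mixed-model methodology the article itself offers.

```python
# Within-subjects 95% CI in the style of Loftus & Masson (1994):
# the interval is based on the subject-by-condition interaction error,
# not on between-subjects variability. Simulated placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k = 20, 3                                   # subjects, conditions
subject_offset = rng.normal(0, 5, (n, 1))      # large between-subject noise
data = np.array([0.0, 1.0, 2.0]) + subject_offset + rng.normal(0, 1, (n, k))

# Remove each subject's mean, then estimate the interaction mean square.
centered = data - data.mean(axis=1, keepdims=True) + data.mean()
ss_error = ((centered - centered.mean(axis=0)) ** 2).sum()
ms_error = ss_error / ((n - 1) * (k - 1))

half = stats.t.ppf(0.975, (n - 1) * (k - 1)) * np.sqrt(ms_error / n)
for j, m in enumerate(data.mean(axis=0)):
    print(f"condition {j}: mean = {m:.2f} +/- {half:.2f}")
```

Because the interval ignores the between-subject offsets, it is far narrower than a between-subjects CI on the same data, which is precisely what makes its relation to NHST model-dependent.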

14.
Studies of homework effects on psychotherapy outcome have produced inconsistent results. Although these findings may reflect the comparability of psychotherapy with and without homework assignments, many of these studies may not have been sensitive enough to detect the effect sizes (ESs) likely to be found when examining homework effects. The present study evaluated the power of homework research and showed that, on average, power in controlled studies is relatively weak, ranging from 0.58 for large ESs to 0.09 for small ESs. Thus, inconsistent findings between studies may well be due to low statistical power. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
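Power figures of this kind come from standard calculations. The sketch below shows how power for a two-tailed independent-samples t test is obtained from the noncentral t distribution; the per-group n of 15 is an illustrative assumption, not a figure taken from the study.

```python
# Power of a two-tailed independent-samples t test for a given Cohen's d.
import numpy as np
from scipy import stats

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Exact power via the noncentral t distribution."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)           # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Probability of landing beyond either critical value under H1.
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

for label, d in (("small", 0.2), ("medium", 0.5), ("large", 0.8)):
    print(f"{label} ES (d = {d}): power = {power_two_sample_t(d, 15):.2f}")
```

With 15 participants per group, power comes out near 0.56 for d = 0.8 and below 0.10 for d = 0.2, roughly the range the abstract reports for controlled homework studies.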

15.
Increasing emphasis has been placed on effect size reporting in the analysis of social science data. Nonetheless, the use of effect size reporting remains inconsistent, and interpretation of effect size estimates continues to be confused. Researchers are presented with numerous effect size estimate options, not all of which are appropriate for every research question. Clinicians also may have little guidance in the interpretation of effect sizes relevant to clinical practice. The current article provides a primer of effect size estimates for the social sciences. Common effect size estimates, their use, and their interpretation are presented as a guide for researchers. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Editorial.     
This editorial announces that the April 1982 issue of the Journal of Consulting and Clinical Psychology (JCCP) officially marked the end of Robert C. Carson's service as Associate Editor of the journal and that Patricia B. Sutker has accepted appointment as Associate Editor and will replace Carson. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
A meta-analytic review of group comparison design studies evaluating peer-assisted learning (PAL) interventions with elementary school students produced positive effect sizes (ESs) indicating increases in achievement (unweighted mean ES = 0.59, SD = 0.90; weighted ES d = 0.33, p < …).

18.
Meta-analytic procedures were used to determine the relation between disability compensation and pain. Of the 157 relevant identified studies, only 32 contained quantifiable data from treatment and control groups. The majority of these exclusively examined chronic low back pain patients (72%). Overall, 136 comparisons were obtained, on the basis of 3,802 pain patients and 3,849 controls. Liberal procedures for estimating effect sizes (ESs) yielded an ES of .60 (p < …).

19.
Reviewed the use of statistical significance testing in the 265 quantitative research articles published in Professional Psychology: Research and Practice from 1990 to 1997. 204 (77%) of these articles used statistical significance testing. Fewer than 20% of the authors correctly used the term statistical significance; many instead described their results simply as "significant." 81.9% of authors did follow APA style by including the degrees of freedom, alpha levels, and values of their test statistics when reporting results. However, the majority of authors made no mention of effect size, although the current APA publication manual (APA, 1994) clearly "encourages" authors to include effect size. The implications of these results for both authors and readers are discussed, with clear suggestions for change proffered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.