Similar Literature
20 similar articles found.
1.
Meta-analysis is now the accepted procedure for summarizing research literatures in areas of applied psychology. Because of the bias for publishing statistically significant findings, while usually rejecting nonsignificant results, our research literatures yield misleading answers to important quantitative questions (e.g., How much better is the average psychotherapy patient relative to a comparable group of untreated controls? How much more aggressive are children who watch a great deal of violent TV than children who watch little or no violence on TV?). While all such research literatures provide overly optimistic meta-analytic estimates, exactly how practically important are these overestimates? Three studies testing the literature on implementation intentions find only slightly elevated effectiveness estimates. Conversely, three studies of another growing research literature (the efficacy of remote intercessory prayer) find it to be misleading and in all likelihood not a real effect (i.e., our three studies suggest the literature likely consists of Type I errors). Rules of thumb to predict which research literatures are likely invalid are offered. Finally, revised publication and data analysis procedures to generate unbiased research literatures in the future are examined. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
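The mechanism described here (a literature built only from significant results overestimates the true effect) can be illustrated with a small simulation. The sketch below is not from the article; the true effect size, sample sizes, and selection rule are arbitrary assumptions chosen only to show the direction of the bias.

```python
# Illustrative sketch (not from the article): simulate how publishing only
# significant, positive results inflates a naive meta-analytic mean effect.
# The true effect, per-group n, and number of studies are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n_per_group, n_studies = 0.20, 30, 2000

observed_d, published = [], []
for _ in range(n_studies):
    treat = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(treat, control)
    d = (treat.mean() - control.mean()) / np.sqrt(
        (treat.var(ddof=1) + control.var(ddof=1)) / 2)
    observed_d.append(d)
    published.append(p < .05 and d > 0)   # crude significance filter

observed_d = np.array(observed_d)
published = np.array(published)
print(f"mean d, all studies:        {observed_d.mean():.2f}")             # near the true effect
print(f"mean d, 'published' subset: {observed_d[published].mean():.2f}")  # noticeably inflated
```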

2.
Objective: The authors conducted a meta-analytic review of adherence–outcome and competence–outcome findings, and examined plausible moderators of these relations. Method: A computerized search of the PsycINFO database was conducted. In addition, the reference sections of all obtained studies were examined for any additional relevant articles or review chapters. The literature search identified 36 studies that met the inclusion criteria. Results: R-type effect size estimates were derived from 32 adherence–outcome and 17 competence–outcome findings. Neither the mean weighted adherence–outcome (r = .02) nor competence–outcome (r = .07) effect size estimates were found to be significantly different from zero. Significant heterogeneity was observed across both the adherence–outcome and competence–outcome effect size estimates, suggesting that the individual studies were not all drawn from the same population. Moderator analyses revealed that larger competence–outcome effect size estimates were associated with studies that either targeted depression or did not control for the influence of the therapeutic alliance. Conclusions: One explanation for these results is that, among the treatment modalities represented in this review, therapist adherence and competence play little role in determining symptom change. However, given the significant heterogeneity observed across findings, mean effect sizes must be interpreted with caution. Factors that may account for the nonsignificant adherence–outcome and competence–outcome findings reported within many of the studies reviewed are addressed. Finally, the implications of these results and directions for future process research are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
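As a point of reference for how r-type effect sizes of this kind are typically pooled, the sketch below applies one standard approach (Fisher's z transform with n - 3 weights and a chi-square heterogeneity test). The abstract does not specify the authors' exact computational procedure, and the per-study values here are invented for illustration.

```python
# Hypothetical sketch of one common way to pool r-type effect sizes:
# Fisher's z transform, inverse-variance (n - 3) weights, and a Q test
# for heterogeneity. The r and n values below are made up.
import numpy as np
from scipy import stats

r = np.array([0.02, -0.10, 0.15, 0.05, 0.30])   # per-study adherence-outcome correlations
n = np.array([40, 55, 30, 80, 25])              # per-study sample sizes

z = np.arctanh(r)          # Fisher's z
w = n - 3                  # inverse of Var(z) = 1 / (n - 3)

z_bar = np.sum(w * z) / np.sum(w)
se = 1 / np.sqrt(np.sum(w))
r_bar = np.tanh(z_bar)

q = np.sum(w * (z - z_bar) ** 2)                # heterogeneity statistic
p_q = stats.chi2.sf(q, df=len(r) - 1)

print(f"weighted mean r = {r_bar:.3f}, 95% CI (z units) = "
      f"[{z_bar - 1.96 * se:.3f}, {z_bar + 1.96 * se:.3f}], Q = {q:.2f} (p = {p_q:.3f})")
```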

3.
Assessed the accuracy of people's stereotypes about gender differences in 2 studies by comparing perceptions of sizes of gender differences with meta-analytic findings. In Study 1, with 184 psychology students, perceptions of variability among men and women and perceptions of mean differences were incorporated into measures of perceived effect sizes. In Study 2, with 145 psychology students, Ss made direct judgments about the size of gender differences. Contrary to previous assertions about people's gender stereotypes, findings indicate that people do not uniformly overestimate gender differences. The results show that Ss are more likely to be accurate or to underestimate gender differences than overestimate them, and perceptions of the size of gender differences are correlated with meta-analytic effect sizes. Furthermore, degree of accuracy is influenced by biases favoring women, in-group favoritism, and the method used to measure perceptions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
This study deals with some of the judgmental factors involved in selecting effect sizes from within the studies that enter a meta-analysis. Particular attention is paid to the conceptual redundancy rule that Smith, Glass, and Miller (1980) used in their study of the effectiveness of psychotherapy for deciding which effect sizes should and should not be counted in determining an overall effect size. Data from a random sample of 25 studies from Smith et al.'s (1980) population of psychotherapy outcome studies were first recoded and then reanalyzed meta-analytically. Using the conceptual redundancy rule, three coders independently coded effect sizes and identified more than twice as many of them per study as did Smith et al. Moreover, the treatment effect estimates associated with this larger sample of effects ranged between .30 and .50, about half the size claimed by Smith et al. Analyses of other rules for selecting effect sizes showed that average effect estimates also varied with these rules. Such results indicate that the average effect estimates derived from meta-analyses may depend heavily on judgmental factors that enter into how effect sizes are selected within each of the individual studies considered relevant to a meta-analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
OBJECTIVE: To examine the impact of home care on hospital days. DATA SOURCES: Search of automated databases covering 1964-1994 using the key words "home care," "hospice," and "healthcare for the elderly." Home care literature review references also were inspected for additional citations. STUDY SELECTION: Of 412 articles that examined impact on hospital use/cost, those dealing with generic home care that reported hospital admissions/cost and used a comparison group receiving customary care were selected (N = 20). STUDY DESIGN: A meta-analysis of secondary data sources published between 1967 and 1992. DATA EXTRACTION: Study characteristics that could have an impact on effect size (i.e., country of origin, study design, disease characteristics of study sample, and length of follow-up) were abstracted and coded to serve as independent variables. Available statistics on hospital days necessary to calculate an effect size were extracted. If necessary information was missing, the authors of the articles were contacted. METHODS: Effect sizes and homogeneity of variance measures were calculated using Dstat software, weighted for sample size. Overall effect sizes were compared by the study characteristics described above. PRINCIPAL FINDINGS: Effect sizes indicate a small to moderate positive impact of home care in reducing hospital days, ranging from 2.5 to 6 days (effect sizes of -.159 and -.379, respectively), depending on the inclusion of a large quasi-experimental study with a large treatment effect. When this outlier was removed from analysis, the effect size for studies that targeted terminally ill patients exclusively was homogeneous across study subcategories; however, the effect size of studies that targeted nonterminal patients was heterogeneous, indicating that unmeasured variables or interactions account for variability. CONCLUSION: Although effect sizes were small to moderate, the consistent pattern of reduced hospital days across a majority of studies suggests for the first time that home care has a significant impact on this costly outcome.

6.
Fillingim and Maixner (Fillingim, R.B. and Maixner, W., Pain Forum, 4(4) (1995) 209-221) recently reviewed the body of literature examining possible sex differences in responses to experimentally induced noxious stimulation. Using a 'box score' methodology, they concluded the literature supports sex differences in response to noxious stimuli, with females displaying greater sensitivity. However, Berkley (Berkley, K.J., Pain Forum, 4(4) (1995) 225-227) suggested the failure of a number of studies to reach statistical significance suggests the effect may be small and of little practical significance. This study used meta-analytic methodology to provide quantitative evidence to address the question of the magnitude of these sex differences in response to experimentally induced pain. We found the effect size to range from large to moderate, depending on whether threshold or tolerance was measured and which method of stimulus administration was used. The values for pressure pain and electrical stimulation, for both threshold and tolerance measures, were the largest. For studies employing a threshold measure, the effect for thermal pain was smaller and more variable. The failures to reject the null hypothesis in a number of these studies appear to have been a function of lack of power from an insufficient number of subjects. Given the estimated effect size of 0.55 for threshold or 0.57 for tolerance, 41 subjects per group are necessary to provide adequate power (0.70) to test for this difference. Of the 34 studies reviewed by Fillingim and Maixner, only seven were conducted with groups of this magnitude. The results of this study compel us to caution authors to obtain adequate sample sizes, and we hope that this meta-analytic review can aid in the determination of sample size for future studies.
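The sample-size claim can be checked with a conventional power calculation. The sketch below assumes a two-sided, two-sample t-test at alpha = .05, which the abstract does not state explicitly; statsmodels is used only as one convenient implementation.

```python
# A minimal check of the abstract's power claim, assuming a two-sided,
# two-sample t-test at alpha = .05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.55, 0.57):                      # threshold / tolerance estimates
    power = analysis.power(effect_size=d, nobs1=41, alpha=0.05, ratio=1.0)
    print(f"d = {d}: power with 41 subjects per group = {power:.2f}")   # roughly 0.70

# Sample size needed per group for power = 0.70 at d = 0.55
n_needed = analysis.solve_power(effect_size=0.55, power=0.70, alpha=0.05, ratio=1.0)
print(f"n per group for 0.70 power at d = 0.55: {n_needed:.1f}")
```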

7.
Several sources of indirect evidence supporting the value of graduate training in psychotherapy are reviewed here. Training protocols that are known to enhance trainees' skills are briefly discussed, as are conclusions of meta-analytic reviews examining relationships between therapist experience and training, and therapy outcome. An updated meta-analysis of therapy outcome studies involving within-study comparisons of psychotherapists of different levels of training and experience is summarized. It is concluded that a variety of outcome sources are associated with modest effect sizes favoring more trained therapists. In many outpatient settings, therapists with more training tend to suffer fewer therapy dropouts than less trained therapists. Shortcomings of available research and speculations about possible variables influencing outcomes are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
When a meta-analysis on results from experimental studies is conducted, differences in the study design must be taken into consideration. A method for combining results across independent-groups and repeated measures designs is described, and the conditions under which such an analysis is appropriate are discussed. Combining results across designs requires that (a) all effect sizes be transformed into a common metric, (b) effect sizes from each design estimate the same treatment effect, and (c) meta-analysis procedures use design-specific estimates of sampling variance to reflect the precision of the effect size estimates. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
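To make the three requirements concrete, the sketch below shows one common way of putting a repeated-measures effect size into the independent-groups (raw-score) metric and attaching a design-specific sampling variance to each estimate. The formulas follow widely used meta-analysis conventions rather than being quoted from this article, and the numerical values are illustrative only.

```python
# Hedged sketch of the common-metric idea: convert a repeated-measures d
# (change-score metric) into the independent-groups (raw-score) metric and
# pair each estimate with a design-specific sampling variance.
import math

def d_ig_variance(d, n1, n2):
    """Approximate sampling variance of d from an independent-groups design."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def rm_to_ig(d_rm, n, rho):
    """Convert a change-score d to the raw-score metric, d_IG = d_RM * sqrt(2(1 - rho)),
    and approximate its sampling variance in that metric (rho = pre-post correlation)."""
    d_ig = d_rm * math.sqrt(2 * (1 - rho))
    var = (1 / n + d_ig**2 / (2 * n)) * 2 * (1 - rho)
    return d_ig, var

# Example: one study of each design, then a precision-weighted (fixed-effect) mean
d1 = 0.45
v1 = d_ig_variance(d1, n1=30, n2=30)            # independent-groups study
d2, v2 = rm_to_ig(d_rm=0.90, n=25, rho=0.60)    # repeated-measures study

w1, w2 = 1 / v1, 1 / v2
pooled = (w1 * d1 + w2 * d2) / (w1 + w2)
print(f"common-metric estimates: {d1:.2f}, {d2:.2f}; pooled d = {pooled:.2f}")
```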

9.
R. B. Cialdini and J. Fultz (see record 1990-14623-001) questioned the validity of M. Carlson and N. Miller's (see record 1987-31249-001) method and objected to 3 reported tests of their negative state relief (NSR) model of mood-induced increments to helpfulness. In response, evidence is presented that the use of judges to define variables is a common tool in psychology and, when used within meta-analyses, consistently meets the relevant criteria of convergent, discriminant trait, and construct validity. Multiple new tests based directly on the more objective criteria that Cialdini and Fultz stipulate for defining NSR variables fail to support their model. New data they presented to challenge the discriminant validity of judges' ratings are shown to be based on methodologically and conceptually flawed procedures. Finally, they reported a (nonsignificant) positive correlation between sadness-manipulation check effect sizes and helpfulness effect sizes, in support of the NSR model. When this correlation was computed in 9 different ways on the basis of more extensive sets of studies, it was uniformly close to zero and nonsignificant, thereby supporting Carlson and Miller's prior meta-analytic outcome. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
This article examines whether differences in the equations commonly used to calculate effect size for single group pretest-posttest (SGPP) designs versus those for control group designs can account for the finding that SGPP designs yield larger mean effect sizes (e.g., M. S. Lipsey & D. B. Wilson, 1993). It was found that the assumptions of no control group effect and the equivalence of pretraining and posttraining dependent variable standard deviations required for these equations to produce equivalent estimates of effect size were violated for some dependent variable types. Results indicate that control group effects and inflation in the standard deviation of the posttraining dependent variable measure account for most of the observed difference in effect size. The most severe violations occurred when the dependent variable was a knowledge assessment. Methods for including data from SGPP designs in meta-analyses that minimize potential biases are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
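A hypothetical illustration of the two effect size equations being contrasted is sketched below: an SGPP effect size standardized on the pretest SD versus a control-group effect size based on the difference in pre-post change. The exact equations examined in the article may differ; these are common textbook forms, and the simulated data are made up.

```python
# Illustrative contrast of a single-group pretest-posttest (SGPP) effect size
# with a control-group design effect size. When the control group also improves,
# the SGPP estimate for the same data comes out larger.
import numpy as np

def d_sgpp(pre, post):
    """SGPP effect size: mean change standardized on the pretest SD."""
    return (np.mean(post) - np.mean(pre)) / np.std(pre, ddof=1)

def d_control(pre_t, post_t, pre_c, post_c):
    """Treatment-group change minus control-group change, standardized on the pooled pretest SD."""
    sd_pool = np.sqrt((np.var(pre_t, ddof=1) + np.var(pre_c, ddof=1)) / 2)
    return ((np.mean(post_t) - np.mean(pre_t)) -
            (np.mean(post_c) - np.mean(pre_c))) / sd_pool

rng = np.random.default_rng(2)
pre_t, post_t = rng.normal(50, 10, 40), rng.normal(58, 13, 40)  # trained group, inflated posttest SD
pre_c, post_c = rng.normal(50, 10, 40), rng.normal(53, 12, 40)  # untrained controls improve a little too

print(f"SGPP d (treatment group only): {d_sgpp(pre_t, post_t):.2f}")
print(f"control-group design d:        {d_control(pre_t, post_t, pre_c, post_c):.2f}")
```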

11.
The purpose of this article is to propose a simple effect size estimate (obtained from the sample size, N, and a p value) that can be used (a) in meta-analytic research where only sample sizes and p values have been reported by the original investigator, (b) where no generally accepted effect size estimate exists, or (c) where directly computed effect size estimates are likely to be misleading. This effect size estimate is called r_equivalent because it equals the sample point-biserial correlation between the treatment indicator and an exactly normally distributed outcome in a two-treatment experiment with N/2 units in each group and the obtained p value. As part of placing r_equivalent into a broader context, the authors also address limitations of r_equivalent. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
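A minimal sketch of the computation implied by this definition follows: recover the t value corresponding to the reported p with df = N - 2, then convert it to a point-biserial correlation. Whether the reported p is one- or two-tailed is an assumption the analyst must make explicit; the example values are arbitrary.

```python
# Sketch of the r_equivalent idea: back out the t implied by the reported
# p and N (df = N - 2), then convert t to a point-biserial correlation.
import math
from scipy import stats

def r_equivalent(p, n, one_tailed=True):
    df = n - 2
    tail_p = p if one_tailed else p / 2
    t = stats.t.ppf(1 - tail_p, df)          # t value that would yield this p
    return math.sqrt(t**2 / (t**2 + df))

print(round(r_equivalent(p=0.05, n=20), 3))   # e.g., p = .05 with N = 20 gives r of about .38
```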

12.
Reviews treatment studies using the Self-Rating Depression Scale (SRDS), the Beck Depression Inventory (BDI), and the Hamilton Rating Scale for Depression (HRSD) as dependent measures. The use of meta-analytic techniques resulted in a comparison of effect sizes, indicating that contrary to some clinicians' beliefs, the SRDS and BDI showed significantly less change in depression following treatment than did the HRSD. Implications for the selection of outcome measures and for the application of meta-analytic techniques to compare dependent measures are discussed. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
The differential effectiveness of group psychotherapy was estimated in a meta-analysis of 111 experimental and quasi-experimental studies published over the past 20 years. A number of client, therapist, group, and methodological variables were examined in an attempt to determine specific as well as generic effectiveness. Three different effect sizes were computed: active versus wait list, active versus alternative treatment, and pre- to posttreatment improvement rates. The active versus wait list overall effect size (0.58) indicated that the average recipient of group treatment is better off than 72% of untreated controls. Improvement was related to group composition, setting, and diagnosis. Findings are discussed within the context of what the authors have learned about group treatment, meta-analytic studies of the extant group literature, and what remains for future research. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
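The 72% figure follows from the usual normal-distribution conversion of a standardized mean difference into the proportion of the comparison group falling below the average treated person (Cohen's U3), as the short check below shows.

```python
# Worked check of the "better off than 72% of untreated controls" statement:
# under normality, an effect size d maps to the proportion of the control
# distribution below the average treated person (Cohen's U3 = Phi(d)).
from scipy.stats import norm

d = 0.58
print(f"Phi({d}) = {norm.cdf(d):.2f}")   # about 0.72, i.e., 72% of untreated controls
```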

14.
Comparative studies of psychotherapy often find few or no differences in the outcomes that alternative treatments produce. Although these findings may reflect the comparability of alternative treatments, studies are often not sufficiently powerful to detect the sorts of effect sizes likely to be found when two or more treatments are contrasted. The present survey evaluated the power of psychotherapy outcome studies to detect differences for contrasts of two or more treatments and treatment vs no-treatment. 85 outcome studies were drawn from 9 journals over a 3-yr period (1984–1986). Data in each article were examined first to provide estimates of effect sizes and then to evaluate statistical power at posttreatment and follow-up. Findings indicate that the power of studies to detect differences between treatment and no treatment is quite adequate given the large effect sizes usually evident for this comparison. However, the power is relatively weak to detect the small-to-medium effect sizes likely to be evident when alternative treatments are contrasted. Thus, the equivalent outcomes that treatments produce may be due to the relatively weak power of the tests. Implications for interpreting outcome studies and for designing comparative studies are highlighted. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
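To give a rough sense of the asymmetry described here, the sketch below computes the per-group sample size needed to detect a large versus a small-to-medium standardized difference, assuming a two-sided two-sample t-test at alpha = .05 and power = .80; the survey's exact conventions and effect size estimates may differ.

```python
# Rough illustration of why treatment-vs-treatment contrasts demand much larger
# samples than treatment-vs-no-treatment contrasts.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for label, d in [("treatment vs no-treatment (large effect)", 0.80),
                 ("treatment vs treatment (small-to-medium effect)", 0.30)]:
    n = solver.solve_power(effect_size=d, power=0.80, alpha=0.05)
    print(f"{label}: about {n:.0f} clients per condition")
```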

15.
An intuitively appealing indicator of magnitude of effect in applied research is an estimate of the probability of the superior outcome of one treatment over another. Parametric and nonparametric estimates are discussed, as is a meta-analytic estimate. Estimates from values of t, the point-biserial correlation, and standardized effect size are presented. A new perspective on J. Cohen's (1988) standards for small, medium, and large effect sizes is provided. Psychologists who are conducting applied primary research or meta-analyses are urged to include such estimation in their reports. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
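One way such a probability-of-superiority estimate can be obtained is sketched below: under normal, equal-variance assumptions it equals Phi(d / sqrt(2)), and d can in turn be derived from t or from a point-biserial correlation. This is a generic illustration of the idea, not the article's specific estimators; the input values are arbitrary.

```python
# Probability that a randomly chosen treated case outscores a randomly chosen
# comparison case, computed from d, from t, and from a point-biserial r (via d).
import math
from scipy.stats import norm

def prob_superiority_from_d(d):
    """Under normal, equal-variance assumptions: P(superiority) = Phi(d / sqrt(2))."""
    return norm.cdf(d / math.sqrt(2))

def d_from_t(t, n1, n2):
    """Standardized mean difference implied by an independent-groups t."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def d_from_r(r):
    """Standardized mean difference implied by a point-biserial correlation."""
    return 2 * r / math.sqrt(1 - r**2)

print(f"{prob_superiority_from_d(0.5):.2f}")                       # medium effect: about .64
print(f"{prob_superiority_from_d(d_from_t(2.1, 30, 30)):.2f}")     # from t with n = 30 per group
print(f"{prob_superiority_from_d(d_from_r(0.24)):.2f}")            # from a point-biserial r
```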

16.
Cognitive biases have been theorized to play a critical role in the onset and maintenance of anxiety and depression. Cognitive bias modification (CBM), an experimental paradigm that uses training to induce maladaptive or adaptive cognitive biases, was developed to test these causal models. Although CBM has generated considerable interest in the past decade, both as an experimental paradigm and as a form of treatment, there have been no quantitative reviews of the effect of CBM on anxiety and depression. This meta-analysis of 45 studies (2,591 participants) assessed the effect of CBM on cognitive biases and on anxiety and depression. CBM had a medium effect on biases (g = 0.49) that was stronger for interpretation (g = 0.81) than for attention (g = 0.29) biases. CBM further had a small effect on anxiety and depression (g = 0.13), although this effect was reliable only when symptoms were assessed after participants experienced a stressor (g = 0.23). When anxiety and depression were examined separately, CBM significantly modified anxiety but not depression. There was a nonsignificant trend toward a larger effect for studies including multiple training sessions. These findings are broadly consistent with cognitive theories of anxiety and depression that propose an interactive effect of cognitive biases and stressors on these symptoms. However, the small effect sizes observed here suggest that this effect may be more modest than previously believed. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
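For readers unfamiliar with the g metric reported here, the sketch below computes a bias-corrected standardized mean difference (Hedges' g) for a single two-group comparison; the simulated data are made up and unrelated to the CBM studies.

```python
# Hedges' g: Cohen's d with the usual small-sample correction factor applied.
import numpy as np

def hedges_g(treat, control):
    n1, n2 = len(treat), len(control)
    sd_pool = np.sqrt(((n1 - 1) * np.var(treat, ddof=1) +
                       (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(treat) - np.mean(control)) / sd_pool
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)       # small-sample correction factor
    return j * d

rng = np.random.default_rng(1)
print(round(hedges_g(rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)), 2))
```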

17.
The social relations model presented in this article provides a solution to some of the problems that plague group psychotherapy research. The model was designed to analyze nonindependent data and can be used to study the ways in which group members interrelate and influence one another. The components of the social relations model are the constant (i.e., group effect), the perceiver effect, the target effect, the relationship effect, and error. By providing estimates of the magnitude of these 5 factors and by examining the relationships among these factors, the social relations model allows investigators to examine a host of research questions that have been inaccessible. Examples of applications of the social relations model to issues of group leadership, interpersonal feedback, and process and outcome research are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression (M. A. Hamilton, 1960, 1967), the Beck Depression Inventory (A. T. Beck, 1978; A. T. Beck & R. A. Steer, 1987), and an aggregation of low reactivity-low specificity measures. These benchmarks were further refined for 3 conditions: treatment completers, intent-to-treat samples, and natural history (wait-list) conditions. The study confirmed significant effects of outcome measure reactivity and specificity on the pretreatment-posttreatment effect sizes. The authors provide practical guidance in using these benchmarks to assess treatment effectiveness in clinical settings. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
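The kind of pretreatment-posttreatment effect size these benchmarks describe can be computed as sketched below, standardizing the mean change on the pretreatment standard deviation. The patient scores and the benchmark value are placeholders, not figures taken from the article.

```python
# Hedged sketch of a pre-post effect size of the type the benchmarks describe.
import numpy as np

def pre_post_d(pre, post, higher_is_worse=True):
    """Standardized pre-to-post change; positive values indicate improvement
    when higher scores mean more severe depression."""
    change = np.mean(pre) - np.mean(post) if higher_is_worse else np.mean(post) - np.mean(pre)
    return change / np.std(pre, ddof=1)

clinic_pre = np.array([28, 31, 24, 35, 27, 30, 22, 29])   # e.g., BDI scores at intake (made up)
clinic_post = np.array([14, 20, 10, 25, 12, 18, 9, 16])   # BDI scores at termination (made up)
benchmark_d = 1.00                                         # placeholder benchmark, not from the article

d = pre_post_d(clinic_pre, clinic_post)
print(f"clinic d = {d:.2f}; benchmark (placeholder) = {benchmark_d:.2f}")
```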

19.
The transportability of Multisystemic Therapy (MST) for the treatment of juvenile offenders in a community-based context was examined in the current study. Results of this New Zealand study showed that significant pre- to posttreatment improvements occurred on most indicators of ultimate (i.e., offending behavior) and instrumental (i.e., youth compliance, family relations) treatment outcomes. Reductions in offending frequency and severity continued to improve across the 6- and 12-month follow-up intervals. In comparison to benchmarked studies, the current study demonstrated a more successful treatment completion rate. Additionally, overall treatment effect sizes were found to be clinically equivalent with the results of previous MST outcome studies with juvenile offenders and significantly greater than the effect sizes found in the control conditions. The findings of this evaluation add to the growing body of evidence that supports MST as an effective treatment for antisocial youth. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Educational researchers assess self-efficacy by asking students to rate their capability of succeeding at specific target tasks (e.g., math test items) and then testing their actual performance on similar test items. Pajares and colleagues (Pajares & Kranzler, 1995; Pajares & Miller, 1994, 1995, in press) argued for the use of identical items to assess self-efficacy and performance in order to maximize self-efficacy's predictive power. In two studies, structural equation models (SEM) demonstrated that this practice led to positively biased estimates of paths from self-efficacy to performance and negatively biased estimates of paths from self-concept to performance. Whereas corrections for this bias did not substantially alter the size of effects or substantive interpretations, results from both studies were consistent with a priori predictions about the nature of this bias. Researchers are encouraged to use similar but not identical items to assess self-efficacy and performance, a construct validity approach to interrogate their interpretations, more diverse outcome measures, and SEM approaches like those demonstrated here.
