Similar Documents
 Found 19 similar documents (search time: 15 ms)
1.
司春杰 (Si Chunjie), 《山东冶金》 (Shandong Metallurgy), 2001, 23(3): 47-49
To verify testing competence within and between laboratories, an interlaboratory comparison test was carried out. Statistical analysis of the comparison results from three laboratories indicated that the scatter in elongation values stems from problems with measurement and with the fracture location, while the differences in the strength data stem from force-measurement error. Accordingly, operators should master control of the tensile testing speed and take care in fitting the broken specimen halves together and in measuring the gauge length after fracture.
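For context, the percentage elongation after fracture being compared across the laboratories is conventionally defined (as in standard tensile-test practice) from the original gauge length L0 and the gauge length re-measured on the fitted-together halves after fracture, Lu; this is why the fit-up of the specimen halves and the gauge-length measurement dominate its scatter:

    A = \frac{L_u - L_0}{L_0} \times 100\%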

2.
The authors argue that a robust version of Cohen's effect size constructed by replacing population means with 20% trimmed means and the population standard deviation with the square root of a 20% Winsorized variance is a better measure of population separation than is Cohen's effect size. The authors investigated coverage probability for confidence intervals for the new effect size measure. The confidence intervals were constructed by using the noncentral t distribution and the percentile bootstrap. Over the range of distributions and effect sizes investigated in the study, coverage probability was better for the percentile bootstrap confidence interval.
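A minimal Python sketch of the measure described, assuming equal 20% trimming in both groups and a pooled Winsorized variance; the pooling rule, the bootstrap settings, and the omission of the rescaling constant (roughly .642 for 20% trimming, which makes the index match Cohen's delta under normality) are assumptions for illustration, not details taken from the abstract:

import numpy as np
from scipy.stats import trim_mean
from scipy.stats.mstats import winsorize

def robust_effect_size(x, y, trim=0.20):
    """Difference of 20% trimmed means over a pooled 20% Winsorized SD."""
    diff = trim_mean(x, trim) - trim_mean(y, trim)
    wv_x = winsorize(x, limits=(trim, trim)).var(ddof=1)
    wv_y = winsorize(y, limits=(trim, trim)).var(ddof=1)
    s_w = np.sqrt(((len(x) - 1) * wv_x + (len(y) - 1) * wv_y)
                  / (len(x) + len(y) - 2))
    return diff / s_w

def percentile_bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample each group, take empirical quantiles."""
    rng = np.random.default_rng(seed)
    boots = [robust_effect_size(rng.choice(x, len(x)), rng.choice(y, len(y)))
             for _ in range(n_boot)]
    return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Example with two simulated samples of 40:
g = np.random.default_rng(1)
print(percentile_bootstrap_ci(g.normal(0.5, 1, 40), g.normal(0.0, 1, 40)))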

3.
The standard Pearson correlation coefficient is a biased estimator of the true population correlation, ρ, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, rc, has been recommended, and a standard formula based on asymptotic results for estimating its standard error is also available. In the present study, the bootstrap standard-error estimate is proposed as an alternative. Monte Carlo simulation studies involving both normal and nonnormal data were conducted to examine the empirical performance of the proposed procedure under different levels of ρ, selection ratio, sample size, and truncation types. Results indicated that, with normal data, the bootstrap standard-error estimate is more accurate than the traditional estimate, particularly with small sample size. With nonnormal data, performance of both estimates depends critically on the distribution type. Furthermore, the bootstrap bias-corrected and accelerated interval consistently provided the most accurate coverage probability for ρ.
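A sketch of the corrected correlation and its bootstrap standard error, assuming direct truncation on the predictor and a known unrestricted predictor SD (set to 1.0 here); the simulated data and sample sizes are illustrative assumptions. The correction is the standard Thorndike Case II formula, and scipy's bootstrap with method='BCa' gives the bias-corrected and accelerated interval the study recommends:

import numpy as np
from scipy.stats import bootstrap, pearsonr

def corrected_r(x, y, sd_unrestricted):
    """Thorndike Case II correction for direct range restriction on x."""
    r = pearsonr(x, y)[0]
    u = sd_unrestricted / np.std(x, ddof=1)   # unrestricted / restricted SD
    return r * u / np.sqrt(1 - r**2 + r**2 * u**2)

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(scale=np.sqrt(0.75), size=500)   # true rho = .5
keep = x > 0                                              # direct truncation on x
res = bootstrap((x[keep], y[keep]),
                lambda xs, ys: corrected_r(xs, ys, sd_unrestricted=1.0),
                paired=True, vectorized=False, method='BCa', random_state=rng)
print(res.standard_error, res.confidence_interval)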

4.
Ignoring a nested factor can influence the validity of statistical decisions about treatment effectiveness. Previous discussions have centered on consequences of ignoring nested factors versus treating them as random factors on Type I errors and measures of effect size (B. E. Wampold & R. C. Serlin, see record 2000-16737-003). The authors (a) discuss circumstances under which the treatment of nested provider effects as fixed as opposed to random is appropriate; (b) present 2 formulas for the correct estimation of effect sizes when nested factors are fixed; (c) present the results of Monte Carlo simulations of the consequences of treating providers as fixed versus random on effect size estimates, Type I error rates, and power; and (d) discuss implications of mistaken considerations of provider effects for the study of differential treatment effects in psychotherapy research.

5.
In the articles, "The Statistic With the Smaller Critical Value," by H. J. Keselman (Psychological Bulletin, 1974, Vol. 81, No. 2, pp. 130-131) in record 1975-22206-001, and "Tukey Tests for Pair-wise Contrasts Following the Analysis of Variance: Is There a Type IV Error?" by H. J. Keselman and Robert Murray (Psychological Bulletin, 1974, Vol. 81, No. 9, pp. 608-609) in record 1975-02174-001, there is an error regarding Jacob Cohen's (Statistical Power Analysis for the Behavioral Sciences; New York: Academic Press, 1969) effect size index, f. The f values and their respective proportions of variance in the two articles are larger than the values that Cohen has operationally defined as small, medium, and large. However, it is important to note that this misrepresentation neither invalidates nor limits the usefulness of the multiple-comparison results.
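For reference, Cohen's f and the proportion of variance accounted for, η², are monotonically related, so each f benchmark implies a proportion of variance; the conversions below use Cohen's own operational definitions (f = .10, .25, .40 for small, medium, and large):

    f = \sqrt{\frac{\eta^2}{1 - \eta^2}}, \qquad \eta^2 = \frac{f^2}{1 + f^2}

so f = .10, .25, .40 correspond to η² ≈ .0099, .0588, and .1379, respectively.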

6.
Reviews Kline's book (see record 2004-13019-000), which surveys the controversy regarding significance testing, offers methods for effect size and confidence interval estimation, and suggests some alternative methodologies. Whether or not one accepts Kline's view of the future of statistical significance testing, there is much of value in this book. As a textbook, it could serve as a reference for an upper-level undergraduate course, but it would be more appropriate for a graduate course. The book is a thought-provoking examination of the uneasy alliance between null hypothesis significance testing and effect size and confidence interval estimation. There is much in this book for those on both sides of the null hypothesis testing debate and for those unsure where they stand. Whatever the future holds, Kline has done well in illustrating recent advances in statistical decision-making.

7.
Objective: In 2005, the Journal of Consulting and Clinical Psychology (JCCP) became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest editorial effort to improve statistical reporting practices in any APA journal in at least a decade, in this article we investigate the efficacy of that change. Method: All intervention studies published in JCCP in 2003, 2004, 2007, and 2008 were reviewed. Each article was coded for method of clinical significance, type of ES, and type of associated CI, broken down by statistical test (F, t, chi-square, r/R², and multivariate modeling). Results: By 2008, clinical significance compliance was 75% (up from 31%), with 94% of studies reporting some measure of ES (reporting improved for individual statistical tests, ranging from η² = .05 to .17, with reasonable CIs). Reporting of CIs for ESs also improved, although only to 40%. Also, the vast majority of reported CIs used approximations, which become progressively less accurate for smaller sample sizes and larger ESs (cf. Algina & Keselman, 2003). Conclusions: Changes are near asymptote for ESs and clinical significance, but CIs lag behind. As CIs for ESs are required for primary outcomes, we show how to compute CIs for the vast majority of ESs reported in JCCP, with an example of how to use CIs for ESs as a method to assess clinical significance.
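A sketch of one such computation: an exact noncentral-t confidence interval for Cohen's d with two independent groups, the kind of interval recommended over normal approximations (cf. Algina & Keselman, 2003). Pivoting on the noncentrality parameter is the standard construction; the function name and the root-search bounds are assumptions for illustration:

import numpy as np
from scipy.stats import nct
from scipy.optimize import brentq

def d_confidence_interval(d, n1, n2, conf=0.95):
    """Noncentral-t CI for Cohen's d, two independent groups.

    Pivot: t = d*sqrt(n1*n2/(n1+n2)) ~ noncentral t with df = n1+n2-2
    and noncentrality lambda = delta*sqrt(n1*n2/(n1+n2)).
    """
    k = np.sqrt(n1 * n2 / (n1 + n2))
    t_obs, df = d * k, n1 + n2 - 2
    lo_p, hi_p = (1 + conf) / 2, (1 - conf) / 2
    # lower limit: observed t sits at the upper tail; upper limit: lower tail
    lam_lo = brentq(lambda lam: nct.cdf(t_obs, df, lam) - lo_p, -50, 50)
    lam_hi = brentq(lambda lam: nct.cdf(t_obs, df, lam) - hi_p, -50, 50)
    return lam_lo / k, lam_hi / k

print(d_confidence_interval(d=0.5, n1=40, n2=40))   # roughly (.05, .95)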

8.
This comment supplements and clarifies issues raised by J. C. Schank and T. J. Koehnle (2009) in their critique of experimental design. First, the pervasiveness of trade-offs in the design of experiments is emphasized (Wiley, 2003). Particularly germane to Schank and Koehnle’s discussion are the inevitable trade-offs in any decisions to include blocking or to standardize conditions in experiments. Second, the interpretation of multiple tests of a hypothesis is clarified. Only when interest focuses on any, rather than each, of N possible responses is it appropriate to adjust criteria for statistical significance of the results. Finally, a misunderstanding is corrected about a disadvantage of large experiments (Wiley, 2003). Experiments with large samples raise the possibility of small, but statistically significant, biases even after randomization of treatments. Because these small biases are difficult for experimenters and readers to notice, large experiments demonstrating small effects require special scrutiny. Such experiments are justified only when they involve minimal human intervention and maximal standardization. Justifications for the inevitable trade-offs in experimental design require careful attention when reporting any experiment.
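Concretely, when significance on any one of N responses would confirm the hypothesis, holding the familywise error rate at α requires tightening the per-test criterion, for example via the Šidák correction (Bonferroni's α/N is the conservative approximation); the independence assumption behind Šidák is mine, for illustration:

    \alpha_{\text{per test}} = 1 - (1 - \alpha)^{1/N} \approx \frac{\alpha}{N}

With N = 6 and α = .05 this gives a per-test criterion of about .0085; no such tightening is needed when each response is of interest in its own right.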

9.
The authors conducted a 30-year review (1969-1998) of the size of moderating effects of categorical variables as assessed using multiple regression. The median observed effect size (f²) is only .002, but 72% of the moderator tests reviewed had power of .80 or greater to detect a targeted effect conventionally defined as small. Results suggest the need to minimize the influence of artifacts that produce a downward bias in the observed effect size and put into question the use of conventional definitions of moderating effect sizes. As long as an effect has a meaningful impact, the authors advise researchers to conduct a power analysis and plan future research designs on the basis of smaller and more realistic targeted effect sizes.
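A sketch of the recommended planning step, computing f² for the moderator step in hierarchical regression and its power via the noncentral F distribution; the noncentrality convention λ = f²(u + v + 1) follows Cohen (1988), and the df values below are illustrative assumptions:

from scipy.stats import f as f_dist, ncf

def f_squared(r2_full, r2_reduced):
    """Effect size for the term(s) added at the moderator step."""
    return (r2_full - r2_reduced) / (1 - r2_full)

def power_moderator(f2, u, v, alpha=0.05):
    """u = df added by the interaction, v = error df of the full model."""
    lam = f2 * (u + v + 1)                 # Cohen (1988) noncentrality
    f_crit = f_dist.ppf(1 - alpha, u, v)
    return ncf.sf(f_crit, u, v, lam)

# The review's median observed moderator effect is tiny:
print(power_moderator(f2=0.002, u=1, v=200))   # roughly .09: badly underpowered
print(power_moderator(f2=0.02,  u=1, v=200))   # "small" by convention: about .50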

10.
In furtherance of its efforts to increase the effectiveness of the use of psychological and educational tests, the American Psychological Association's Division 5 (Evaluation and Measurement) appointed a Test Use Committee in the fall of 1954. At the time of appointing this committee, its mission was to carry on where the Committee on Test Standards left off and its particular concern would be to consider problems of training test users and to study other ways in which services involving the use of tests could be improved. After some study, the committee decided that an appropriate starting point for its activity would be to conduct a survey of the training programs, course offerings, and placement procedures for high level specialists in the field of measurement, since such specialists exercise a major influence on the use of tests. In the summer of 1956 a questionnaire was mailed to 83 Division 5 members who were deemed likely to have some connection with measurement and evaluation training programs at the graduate level. The questionnaire items and the responses are summarized in this report.

11.
Underpowered studies persist in the psychological literature. This article examines reasons for their persistence and the effects on efforts to create a cumulative science. The "curse of multiplicities" plays a central role in the presentation. Most psychologists realize that testing multiple hypotheses in a single study affects the Type I error rate, but corresponding implications for power have largely been ignored. The presence of multiple hypothesis tests leads to 3 different conceptualizations of power. Implications of these 3 conceptualizations are discussed from the perspective of the individual researcher and from the perspective of developing a coherent literature. Supplementing significance tests with effect size measures and confidence intervals is shown to address some but not necessarily all problems associated with multiple testing.
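The arithmetic behind those conceptualizations, for k independent tests each run at level α with the same per-test power (the independence assumption is mine, for illustration):

def multiplicity(k, alpha=0.05, per_test_power=0.5):
    """Familywise Type I error plus 'any-effect' and 'all-effects' power."""
    familywise_alpha = 1 - (1 - alpha) ** k
    any_power = 1 - (1 - per_test_power) ** k
    all_power = per_test_power ** k
    return familywise_alpha, any_power, all_power

# Five tests at per-test power .50: the chance of detecting *all* five
# effects is only .5**5 ~= .03, even though each test looks adequately run.
print(multiplicity(k=5))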

12.
A meta-analysis comparing "undirected" and "conceptual" MMPI studies, and conceptual Rorschach and MMPI studies, indicated the following conclusions: (a) Conceptual work more successfully validates an assessment instrument than does undirected investigation. (b) The validatory success of the "average" conceptual Rorschach study is comparable to that of similar MMPI work; this finding suggests that the former's questionable status may be based on sociocultural factors rather than scientific ones. (c) The "average" conceptual Rorschach or MMPI study has only modest explanatory power. (d) Investigators' misuse of χ² has resulted in exaggerated effect sizes in many instances where the statistic was employed. It is suggested that future research be judged on the coherence of its inference processes, the specificity of its predictions, and the amount of variance it explains.
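For reference, the effect size conventionally read off a χ² test is Cohen's w; because it depends on both the statistic and the N it is computed over, applying χ² to an inappropriate unit of analysis or sample count directly inflates the apparent effect:

    w = \sqrt{\frac{\chi^2}{N}}

For example, χ² = 10 on N = 100 observations gives w ≈ .32, while the same χ² wrongly referred to N = 50 would suggest w ≈ .45.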

13.
Reports an error in the original article by C. Raymond Millimet and Roger P. Greenberg (Journal of Consulting and Clinical Psychology, 1973[Apr], 40[2], 188-195). On pages 192 and 193, the omega-square values of Tables 4 and 6 are incorrect. While the correct values are considerably smaller, their relative magnitude remains unchanged, and their interpretation as discussed in the article also remains unchanged. A copy of the corrected tables may be obtained by writing C. Raymond Millimet, Department of Psychology, University of Nebraska, Omaha, Nebraska 68101. (The following abstract of this article originally appeared in record 1973-31813-001.) Asked 11 clinical psychologists and 1 counseling psychologist to judge 3 behavioral-neurological signs and 3 psychometric signs in various combinations of presence or absence. Results are consistent with previous findings that a linear model adequately accounts for the variability of a judge's responses. The high interjudge agreement correlations and test-retest reliability estimates strongly suggest that psychologists can render reliable and mutually consistent judgments and are discussed in terms of symptom complexity and diversity. 5 of the 6 symptoms made a moderate to sizable contribution toward a diagnosis of organicity, especially the symptom emphasizing the presence of an unusual gait and some trouble grasping objects.
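For reference, the omega-squared index whose tabled values were corrected is conventionally estimated, for a one-way fixed-effects design with k groups (Hays, 1963), as:

    \hat{\omega}^2 = \frac{SS_{\text{between}} - (k - 1)\, MS_{\text{within}}}{SS_{\text{total}} + MS_{\text{within}}}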

14.
Two expanded models (i.e., mediated and moderated) of the theory of work adjustment (TWA; R. V. Dawis, G. England, & L. H. Lofquist, 1964; R. V. Dawis & L. H. Lofquist, 1984) were tested for their capacity to explain the job satisfaction of a sample of lesbian, gay, and bisexual employees (N=397). Consistent with cultural critiques of the TWA, person-organization fit perceptions were tested as the mediator of the relationship between heterosexism and job satisfaction in one set of hypotheses, and experiences with informal heterosexism were tested as a moderator in the relationship between person-organization fit perceptions and job satisfaction in a separate set of hypotheses. The mediated model but not the moderated model was supported. Results were confirmed by a cross-validation sample.

15.
The generation effect (generated words are remembered better than read words) for anagrams, rhymes, and associates of target words was examined in young, elderly, and very old subjects. Experiments 1 and 2 showed that only young subjects benefit from the generation effect in a free-recall test when the rule is of a phonological nature. Experiments 3, 4, and 5 showed that the generation effect for rhymes was due to a resource-dependent, self-initiated process. Experiments 4 and 5 showed that in a divided-attention situation, the generation effect for rhymes is not significant in young subjects, but the generation effect for semantic associates remains significant for both groups (Experiment 5). The results are discussed within the environmental support framework and the transfer-appropriate processing framework.

16.
17.
Replies to comments made by G. E. Gignac (see record 2005-06671-010) on the current authors' original article (see record 2003-02341-015). Gignac reanalyzed the factor structure of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) and found results that differed from those the authors obtained initially. The authors tracked down the surprising sources of those discrepancies. G. E. Gignac's hierarchical model of emotional intelligence appears promising, and the authors anticipate that further investigations of the MSCEIT factor structure may yield additional information.

18.
In their comments on the authors' article (see record 2003-10163-009), R. C. Serlin, B. E. Wampold, and J. R. Levin (see record 2003-10163-011) and P. Crits-Christoph, X. Tu, and R. Gallop (see record 2003-10163-010) took issue with the authors' suggestion to evaluate therapy studies with nested providers with a fixed-model approach. In this rejoinder, the authors comment on Serlin et al.'s critique by showing that their arguments do not apply, are based on misconceptions about the purpose and nature of statistical inference, or are based on flawed reasoning. The authors also comment on Crits-Christoph et al.'s critique by showing that the proposed approach is very similar to, but less inclusive than, their own suggestion.

19.
Latent constructs involved in California Verbal Learning Test (D. C. Delis, J. H. Kramer, E. Kaplan, & B. A. Ober, 1987) performance were examined using confirmatory factor analysis in 388 epilepsy surgery candidates. Eight factor models were compared. A single-factor model was examined, along with 7 models accommodating constructs of auditory attention, inaccurate recall, and delayed recall in different combinations. The retained model consisted of 3 correlated factors: Auditory Attention, Verbal Learning, and Inaccurate Recall. Validity of this factor structure was examined in a subsample of patients with left and right temporal lobe epilepsy. All 3 factors were related to seizure focus and magnetic resonance imaging hippocampal volume. Only Verbal Learning was related to hippocampal neuropathology, supporting the distinction between learning and attention in the factor structure.

