Similar documents (20 results)
1.
Reports an error in the original article by J. Krueger (American Psychologist, 2001, Vol 56[1], pp. 16-26). In Figure 2, on page 22, two of the curves are labeled "p(H0)=.9" and "p(H0)=.1." These labels should have been reversed. (The following abstract of this article originally appeared in record 2001-16601-002.) Null hypothesis significance testing (NHST) is the researcher's workhorse for making inductive inferences. This method has often been challenged, has occasionally been defended, and has persistently been used through most of the history of scientific psychology. This article reviews both the criticisms of NHST and the arguments brought to its defense. The review shows that the criticisms address the logical validity of inferences arising from NHST, whereas the defenses stress the pragmatic value of these inferences. The author suggests that both critics and apologists implicitly rely on Bayesian assumptions. When these assumptions are made explicit, the primary challenge for NHST--and any system of induction--can be confronted. The challenge is to find a solution to the question of replicability. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
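The Bayesian point in this abstract can be made concrete with a direct application of Bayes' theorem: the posterior probability of H0 after a significant result depends heavily on the prior p(H0), which is what the two (correctly labeled) curves in Figure 2 trace. A minimal sketch follows; the alpha and power values are illustrative assumptions, not numbers taken from the article.

```python
# Posterior probability of H0 after observing p < .05, for the two
# priors named in the corrected figure labels. Alpha and power are
# assumed values for illustration, not figures from Krueger (2001).
alpha = 0.05   # p(significant result | H0)
power = 0.80   # p(significant result | H1), assumed

for prior_h0 in (0.1, 0.9):
    prior_h1 = 1 - prior_h0
    post_h0 = (alpha * prior_h0) / (alpha * prior_h0 + power * prior_h1)
    print(f"p(H0) = {prior_h0:.1f}  ->  p(H0 | p < .05) = {post_h0:.3f}")
```

With a prior of .1, the significant result leaves p(H0) near .007; with a prior of .9, it remains at roughly .36. This is why a p value alone cannot settle the replication-relevant quantity p(H0 | data).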

2.
Null hypothesis significance testing (NHST) is arguably the most widely used approach to hypothesis evaluation among behavioral and social scientists. It is also very controversial. A major concern expressed by critics is that such testing is misunderstood by many of those who use it. Several other objections to its use have also been raised. In this article the author reviews and comments on the claimed misunderstandings as well as on other criticisms of the approach, and he notes arguments that have been advanced in support of NHST. Alternatives and supplements to NHST are considered, as are several related recommendations regarding the interpretation of experimental data. The concluding opinion is that NHST is easily misunderstood and misused but that when applied with good judgment it can be an effective aid to the interpretation of experimental data. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Null hypothesis statistical testing (NHST) has been debated extensively but always successfully defended. The technical merits of NHST are not disputed in this article. The widespread misuse of NHST has created a human factors problem that this article intends to ameliorate. This article describes an integrated, alternative inferential confidence interval approach to testing for statistical difference, equivalence, and indeterminacy that is algebraically equivalent to standard NHST procedures and therefore exacts the same evidential standard. The combined numeric and graphic tests of statistical difference, equivalence, and indeterminacy are designed to avoid common interpretive problems associated with NHST procedures. Multiple comparisons, power, sample size, test reliability, effect size, and cause-effect ratio are discussed. A section on the proper interpretation of confidence intervals is followed by a decision rule summary and caveats. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
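As a rough illustration of the inferential confidence interval (ICI) idea, the sketch below shrinks each group's conventional CI by a reduction factor chosen so that non-overlap of the two ICIs matches the verdict of a two-sample t test; this follows the formulation commonly attributed to Tryon (2001) and should be verified against the original before use. In the full procedure, two ICIs that both fall inside a preset equivalence band indicate statistical equivalence, and any other pattern indicates indeterminacy.

```python
# Minimal sketch of inferential confidence intervals (ICIs) for two
# independent means. The reduction factor E below follows the published
# formulation as commonly summarized; check it against Tryon (2001)
# before relying on it.
import numpy as np
from scipy import stats

def inferential_cis(x1, x2, alpha=0.05):
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    se1 = x1.std(ddof=1) / np.sqrt(len(x1))
    se2 = x2.std(ddof=1) / np.sqrt(len(x2))
    # E shrinks each group's CI so that non-overlap of the two ICIs
    # corresponds to a significant two-sample t test at alpha.
    e = np.sqrt(se1**2 + se2**2) / (se1 + se2)
    df = len(x1) + len(x2) - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    ci1 = (x1.mean() - e * t_crit * se1, x1.mean() + e * t_crit * se1)
    ci2 = (x2.mean() - e * t_crit * se2, x2.mean() + e * t_crit * se2)
    return ci1, ci2

rng = np.random.default_rng(1)
g1, g2 = rng.normal(0, 1, 30), rng.normal(0.8, 1, 30)
ici1, ici2 = inferential_cis(g1, g2)
print(ici1, ici2)  # non-overlap -> statistical difference at alpha
```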

4.
Much attention has recently been given to the problems involved with the traditional approach to null hypothesis significance testing (NHST). Many have suggested that NHST should perhaps be abandoned altogether in favor of other bases for conclusions, such as confidence intervals and effect size estimates (e.g., F. L. Schmidt; see record 83-24994). The purposes of this article are to (a) review the functions that data analysis is supposed to serve in the social sciences, (b) examine the ways in which these functions are performed by NHST, (c) examine the case against NHST, and (d) evaluate interval-based estimation as an alternative to NHST. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Responds to comments by W. W. Tryon, R. E. McGrath, R. G. Malgady, R. Falk, B. Thompson, and M. M. Granaas (see records 1998-04417-011, 1998-04417-012, 1998-04417-013, 1998-04417-014, 1998-04417-015, and 1998-04417-016, respectively) on the author's article (see record 1997-02239-002) defending use of the null hypothesis statistical test (NHST). The logic of NHST has been challenged by 3 claims: (1) the null hypothesis is always false, so a test of the null hypothesis is only a search for what is already known to be true; (2) the form of logic on which NHST rests is flawed; and (3) NHST does not tell one what one wants to know. In rebutting these claims, the author maintains that although there may be good reasons to give up NHST, these particular points are not among them. Key points of each commentary are addressed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Jacob Cohen (see record 1995-12080-001) raised a number of questions about the logic and information value of the null hypothesis statistical test (NHST). Specifically, he suggested that: (1) The NHST does not tell us what we want to know; (2) the null hypothesis is always false; and (3) the NHST lacks logical integrity. It is the author's view that although there may be good reasons to give up the NHST, these particular points made by Cohen are not among those reasons. When addressing these points, the author also attempts to demonstrate the elegance and usefulness of the NHST. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Confidence intervals (CIs) for means are frequently advocated as alternatives to null hypothesis significance testing (NHST), and a common theme in the debate is that conclusions from CIs and NHST should be mutually consistent. The authors examined a class of CIs for which the conclusions are said to be inconsistent with NHST in within-subjects designs and a class for which the conclusions are said to be consistent. The difference between them is a difference in models. In particular, the main issue is that the class for which the conclusions are said to be consistent derives from fixed-effects models with subjects fixed, not mixed models with subjects random. The authors present mixed-model methodology that has been popularized in the statistical literature and implemented in statistical software procedures. Generalizations to different classes of within-subjects designs are explored, and comments on the future direction of the debate on NHST are offered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
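A sketch of the mixed-model route the article advocates, using a random subject intercept for a one-way within-subjects design; the statsmodels call and the simulated data are illustrative choices, not the authors' own procedures.

```python
# Random-intercept mixed model for a one-way within-subjects design,
# from which condition-effect CIs consistent with the mixed-model tests
# can be read off. Column names and data are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, conds = 20, ["A", "B", "C"]
subj_eff = rng.normal(0, 1.0, n_subj)          # random subject intercepts
rows = []
for s in range(n_subj):
    for j, c in enumerate(conds):
        rows.append({"subject": s, "cond": c,
                     "y": 10 + 0.5 * j + subj_eff[s] + rng.normal(0, 0.8)})
data = pd.DataFrame(rows)

model = smf.mixedlm("y ~ C(cond)", data, groups=data["subject"])
fit = model.fit()
print(fit.summary())   # fixed-effect estimates with standard errors
print(fit.conf_int())  # CIs that agree with the mixed-model t/F tests
```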

8.
Comments on the article by R. L. Hagen (see record 1997-02239-002) praising the null hypothesis statistical test (NHST). Hagen's praise of the NHST may be supported on purely technical grounds, but it is unfortunate if it prolongs primary reliance on NHST for evaluating quantitative difference and equivalence, given the prominent human factors problem of widespread and intractable interpretation errors. Alternative methods that are far less subject to misinterpretation are available for these purposes. The science of psychology can only benefit by supplementing, if not replacing, NHST practices with these methods. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Research on general thinking abilities—productive, higher order, critical, and creative thinking—has progressed slowly compared with the rapid progress that has been made in the study of cognitive structures and procedures. As alternatives to currently prevailing assumptions, three framing assumptions for the study of thinking are proposed, involving situated cognition, personal and social epistemologies, and conceptual competence. Evidence consistent with these assumptions is outlined, and topics in the psychology of thinking are discussed in relation to the assumptions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
This project studied the intercorrelations and long-term stabilities of standard Wechsler Adult Intelligence Scale—Revised (WAIS—R), Wechsler Memory Scale—Revised (WMS—R), and Auditory–Verbal Learning Test (AVLT) summary indexes. It also reports similar data on the recently published Mayo Cognitive Factor Scales (MCFS), which are derivative indexes for the combined administrations of these 3 tests. These analyses challenge 2 assumptions that most psychologists make when interpreting adult cognitive tests: (a) that for cognitively normal people, performance in one cognitive domain correlates well with and predicts functioning in other cognitive domains, and (b) that in the absence of pathology, cognition is stable. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
The question of whether lay attributors are biased in their discounting of 1 cause given an alternative cause has not been resolved by decades of research, largely due to the lack of a clear standard for the rational amount of discounting. The authors propose a normative model in which the attributor's causal schemas and discounting inferences are represented in terms of subjective probability. The analysis examines Kelley's (1972b) proposed causal schemas and then other schemas for multiple causes (varying in assumptions about prior probability, sufficiency, correlation, and number of causes) to determine when discounting is rational. It reveals that discounting is implied by most, but not all, possible causal schemas, albeit at varying amounts. Hence, certain patterns of discounting previously interpreted as biases may, in fact, reflect coherent inferences from causal schemas. Results of 2 studies, which measured causal assumptions and inferences, support this interpretation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
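To see how discounting falls out of a probabilistic causal schema, the toy computation below treats the effect as a noisy-OR of two independent causes and compares p(C1 | effect) with p(C1 | effect, C2 present). The noisy-OR parameterization and all numbers are assumptions for illustration; the article's analysis varies exactly these ingredients (priors, sufficiency, correlation, number of causes).

```python
# Rational discounting as Bayesian inference over two causes of an
# effect, under an assumed noisy-OR schema with invented parameters.
from itertools import product

p_c1, p_c2 = 0.3, 0.3   # prior probabilities of each cause
w1, w2 = 0.9, 0.9       # causal sufficiency ("strength") of each cause

def p_effect(c1, c2):
    # Noisy-OR: either present cause can independently produce the effect.
    return 1 - (1 - w1 * c1) * (1 - w2 * c2)

def posterior_c1(known_c2=None):
    num = den = 0.0
    for c1, c2 in product((0, 1), repeat=2):
        if known_c2 is not None and c2 != known_c2:
            continue
        prior = (p_c1 if c1 else 1 - p_c1) * (p_c2 if c2 else 1 - p_c2)
        joint = prior * p_effect(c1, c2)
        den += joint
        num += joint * c1
    return num / den

print(f"p(C1 | E)         = {posterior_c1():.3f}")
print(f"p(C1 | E, C2 = 1) = {posterior_c1(known_c2=1):.3f}  (discounted)")
```

Under these parameters the posterior for C1 drops from about .60 to about .32 once the alternative cause is known to be present, so a sizeable amount of discounting is the coherent inference, not a bias.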

12.
A quiet methodological revolution, a modeling revolution, has occurred over the past several decades, almost without discussion. In contrast, the 20th century ended with contentious argument over the utility of null hypothesis significance testing (NHST). The NHST controversy may have been at least partially irrelevant, because in certain ways the modeling revolution obviated the NHST argument. I begin with a history of NHST and modeling and their relation to one another. Next, I define and illustrate principles involved in developing and evaluating mathematical models. I then discuss the difference between using statistical procedures within a rule-based framework and building mathematical models from a scientific epistemology. Only the former is treated carefully in most psychology graduate training. The pedagogical implications of this imbalance and the revised pedagogy required to account for the modeling revolution are described. To conclude, I discuss how attention to modeling implies shifting statistical practice in certain progressive ways. The epistemological basis of statistics has moved away from a set of procedures applied mechanistically and toward building and evaluating statistical and scientific models. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
The process of reexamining the methodological and metatheoretical assumptions of personality psychology over the past two decades has been useful for both critics and practitioners of personality research. Although the field has progressed substantially, some critics continue to raise 1960s-vintage complaints, and some researchers perpetuate earlier abuses. We believe that a single issue—construct validity—underlies the perceived and actual shortcomings of current assessment-based personality research. Unfortunately, many psychologists seem unaware of the extensive literature on construct validity. This article reviews five major contributions to our understanding of construct validity and discusses their importance for evaluating new personality measures. This review is intended as a guide for practitioners as well as an answer to questions raised by critics. Because the problem of construct validity is generic to our discipline, these issues are significant not only for personality researchers but also for psychologists in other domains. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Discusses the multitrait-multimethod (MTMM) matrix, which is widely used by researchers in applied psychology to assess the convergent and discriminant validity of constructs. A path analytic conceptualization of MTMM matrix analysis shows that the appropriateness of the inferences drawn on the basis of this method depends heavily on the extent to which its underlying assumptions are fulfilled. To test these assumptions, as well as to interpret the relationships in the MTMM matrix, it is recommended that a confirmatory factor analytic model be used. This technique is illustrated by a reanalysis of J. Wanous and E. Lawler's (see record 1972-27988-001) MTMM matrix; the reanalysis shows how faulty inferences were drawn due to a violation of the assumptions underlying the matrix. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
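The confirmatory factor analytic conceptualization can be sketched as a decomposition of the model-implied correlation matrix into trait, method, and uniqueness components. The toy example below uses 3 traits and 2 methods with invented loadings; it illustrates the parameterization only, not the Wanous and Lawler reanalysis.

```python
# Toy model-implied correlation matrix for a CFA parameterization of an
# MTMM design: 3 traits x 2 methods = 6 measures, with
# Sigma = Lt Pt Lt' + Lm Pm Lm' + Theta. All values are invented.
import numpy as np

n_traits, n_methods = 3, 2
lt, lm, theta = 0.7, 0.4, 0.35  # trait loading, method loading, uniqueness
                                # (theta chosen so the diagonal equals 1)
Lt = np.zeros((6, n_traits))    # measures ordered T1M1, T2M1, T3M1, T1M2, ...
Lm = np.zeros((6, n_methods))
for i in range(6):
    Lt[i, i % n_traits] = lt    # each measure loads on one trait factor
    Lm[i, i // n_traits] = lm   # ... and on one method factor

Pt = np.full((n_traits, n_traits), 0.3) + 0.7 * np.eye(n_traits)  # trait corrs .3
Pm = np.eye(n_methods)                                            # methods uncorrelated
Sigma = Lt @ Pt @ Lt.T + Lm @ Pm @ Lm.T + theta * np.eye(6)

print(np.round(Sigma, 2))
# Validity diagonal (same trait, different method): lt**2 = .49.
# Heterotrait-monomethod entries: lt**2 * .3 + lm**2 = .31 -- shared
# method variance inflates same-method correlations, the confound the
# CFA separates and raw inspection of the MTMM matrix can miss.
```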

15.
Replies to H. W. Reese and M. L. Schack's (see PA, Vol. 51, Issue 6) criticisms of the present author's earlier conclusion that judgments are not subject to any known source of error and explanations are subject to Type II error. It is shown that these criticisms are based on incorrect assumptions about Piaget's theory and Brainerd's conclusion. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Analyzes the Stanford Prison Experiment of P. G. Zimbardo et al. (1973) and questions, on methodological grounds, several of their inferences. Empirical evidence is presented to elucidate and buttress these criticisms, and an alternative interpretation of the outcome of the Stanford experiment is proposed. (28 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Bayes rules.     
Responds to the F. Schmidt and J. Hunter (see record 2002-10575-012), J. L. Brand (see record 2002-10575-013), R. K. Guenther (see record 2002-10575-014), K. A. Markus (see record 2002-10575-015), and S. G. Hofmann (see record 2002-10575-016) comments on the J. Krueger (see record 2001-16601-002) discussion on null hypothesis significance testing (NHST). Krueger responds to each of the criticisms in turn. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Some methodologists have recently suggested that scientific psychology's overreliance on null hypothesis significance testing (NHST) impedes the progress of the discipline. In response, a number of defenders have maintained that NHST continues to play a vital role in psychological research. Both sides of the argument to date have been presented abstractly. The authors take a different approach to this issue by illustrating the use of NHST along with 2 possible alternatives (meta-analysis as a primary data analysis strategy and Bayesian approaches) in a series of 3 studies. Comparing and contrasting the approaches on actual data brings out the strengths and weaknesses of each approach. The exercise demonstrates that the approaches are not mutually exclusive but instead can be used to complement one another. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
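A minimal sketch of the "complementary approaches" point: the same simulated two-group data analyzed with a classical t test and with a rough Bayesian gloss via the BIC approximation to the Bayes factor. The Bayesian route here is one simple stand-in (Wagenmakers's 2007 BIC approximation), not the specific analyses reported in the article; the data are simulated.

```python
# The same two-group data analyzed two ways: NHST (t test) and an
# approximate Bayes factor from BIC values of nested mean models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(100, 15, 40)
b = rng.normal(108, 15, 40)

# 1) NHST
t, p = stats.ttest_ind(a, b)
print(f"t = {t:.2f}, p = {p:.4f}")

# 2) BIC-approximate Bayes factor: H0 (common mean) vs H1 (two means)
y = np.concatenate([a, b])
n = len(y)
rss0 = np.sum((y - y.mean()) ** 2)                       # H0 residuals
rss1 = np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2)
bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
bf01 = np.exp((bic1 - bic0) / 2)                         # evidence for H0
print(f"BF01 = {bf01:.3f}  (values < 1 favor H1)")
```

The two outputs answer different questions (long-run error control versus relative evidence), which is the sense in which the approaches complement rather than replace each other.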

19.
Comments on the article by R. L. Hagen (see record 1997-02239-002) defending the logic and practice of null hypothesis statistical testing (NHST). It is argued that model fitting provides an approach to data analysis that is more appropriate to the cognitive needs of the researcher than is NHST. Model fitting combines the NHST ability to falsify hypotheses with the parameter-estimation characteristic of confidence intervals in an approach that is simpler to learn, understand, and use. Effect size estimation is central to the approach, and power calculations are vastly simplified relative to NHST. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Cure rate estimation is an important issue in clinical trials for diseases such as lymphoma and breast cancer, and mixture models are the principal statistical method. In the last decade, mixture models under different distributions, such as the exponential, Weibull, log-normal, and Gompertz, have been discussed and used. However, these models involve stronger distributional assumptions than is desirable, and inferences may not be robust to departures from these assumptions. In this paper, a mixture model is proposed using the generalized F distribution family. Although this family is seldom used because of computational difficulties, it has the advantage of being very flexible and of including many commonly used distributions as special cases. The generalized F mixture model can relax the usual stronger distributional assumptions and allow the analyst to uncover structure in the data that might otherwise have been missed. This is illustrated by fitting the model to data from large-scale clinical trials with long follow-up of lymphoma patients. Computational problems with the model and model selection methods are discussed. Comparisons of the maximum likelihood estimates with those obtained from mixture models under other distributions are included.
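A sketch of the mixture cure likelihood being described: the population survival function is S(t) = pi + (1 - pi) * S_u(t), where pi is the cure fraction and S_u is the latency distribution of the uncured. The generalized F latency is numerically demanding, so this toy uses its Weibull special case on simulated data; nothing here reproduces the paper's lymphoma analyses.

```python
# Mixture cure model fit by maximum likelihood, using the Weibull
# special case of the latency family on simulated right-censored data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, true_pi = 500, 0.35
cured = rng.random(n) < true_pi
event_t = rng.weibull(1.5, n) * 2.0            # latency for the uncured
t_obs = np.where(cured, np.inf, event_t)       # cured never fail
censor = rng.uniform(0, 6, n)
time = np.minimum(t_obs, censor)
event = (t_obs <= censor).astype(float)        # 1 = event, 0 = censored

def negloglik(params):
    logit_pi, log_k, log_lam = params
    pi = 1 / (1 + np.exp(-logit_pi))           # cure fraction
    k, lam = np.exp(log_k), np.exp(log_lam)    # Weibull shape, scale
    h = (k / lam) * (time / lam) ** (k - 1)    # hazard of the uncured
    s_u = np.exp(-((time / lam) ** k))         # survival of the uncured
    s_pop = pi + (1 - pi) * s_u                # population survival
    f_pop = (1 - pi) * h * s_u                 # population density
    return -np.sum(event * np.log(f_pop) + (1 - event) * np.log(s_pop))

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
logit_pi = fit.x[0]
print(f"estimated cure fraction: {1 / (1 + np.exp(-logit_pi)):.3f}")
```

Swapping the Weibull hazard and survival for a more flexible family such as the generalized F changes only the two lines computing h and s_u, which is the modularity the mixture formulation exploits.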
