Similar Documents
20 similar documents retrieved.
1.
Addressed interpersonal factors affecting group entrapment and also attempted to delineate a conceptual link between collective entrapment and I. L. Janis's (1972, 1982) notion of groupthink. Two experiments were conducted in which 3-person groups were assigned either majority or unanimity rule as an official consensus requirement for their initial decision. It was expected and confirmed that groups whose initial decision processes were guided by unanimity rule were more often entrapped in the chosen course of action than were groups with majority rule. The results also suggested that homogeneity of members' opinions at the outset of interaction and the group's rationalization norm were responsible for the observed difference. Discussion is focused on the implications of these findings for administrative decision contexts and their conceptual link to the notion of groupthink. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Large-sample confidence intervals (CI) for reliability, validity, and unattenuated validity are presented. The CI for unattenuated validity is based on the Bonferroni inequality, which relies on one CI for test–retest reliability and one for validity. Covered are four reliability–validity situations: (a) both estimates were from random samples; (b) reliability was from a random sample but validity was from a selected sample; (c) validity was from a random sample but reliability was from a selected sample; and (d) both estimates were from selected samples. All CIs were evaluated by using a simulation. CIs on reliability, validity, or unattenuated validity are accurate as long as selection ratio is at least 20% and selected sample size is 100 or larger. When selection ratio is less than 20%, estimators tend to underestimate their parameters. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
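The abstract does not reproduce the interval formulas, so the sketch below only illustrates the Bonferroni idea it describes: build one Fisher-z confidence interval for test–retest reliability and one for validity, each at α/2, and combine their endpoints into a conservative interval for the unattenuated validity r_xy / sqrt(r_xx). The sample sizes, correlations, and the particular attenuation correction shown are illustrative assumptions, not the authors' estimator.

```python
import numpy as np
from scipy import stats

def fisher_ci(r, n, alpha):
    """Fisher-z confidence interval for a single correlation."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    crit = stats.norm.ppf(1 - alpha / 2)
    return np.tanh(z - crit * se), np.tanh(z + crit * se)

def bonferroni_ci_unattenuated(r_xy, n_val, r_xx, n_rel, alpha=0.05):
    """Conservative CI for validity corrected for predictor unreliability,
    r_xy / sqrt(r_xx): each component CI uses alpha/2, so by the Bonferroni
    inequality the combined interval has coverage of at least 1 - alpha."""
    v_lo, v_hi = fisher_ci(r_xy, n_val, alpha / 2)
    r_lo, r_hi = fisher_ci(r_xx, n_rel, alpha / 2)
    # corrected validity increases with r_xy and decreases with r_xx
    return v_lo / np.sqrt(r_hi), v_hi / np.sqrt(r_lo)

# illustrative numbers only (not from the study)
print(bonferroni_ci_unattenuated(r_xy=0.40, n_val=200, r_xx=0.75, n_rel=150))
```

Because each component interval is built at α/2, the joint coverage of the combined interval is at least 1 − α, which is the Bonferroni guarantee the abstract mentions.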

3.
The sample consisted of 190 Utah physicians fully certified as specialists by an American Board. 80 scores relevant to the performance of these physicians were intercorrelated and factor analyzed using the principal components solution based on eigenvalues and eigenvectors. The 29 factors which had an eigenvalue greater than 1.00 were rotated by the varimax procedure and interpreted. The most important finding was the great criterion complexity for this group of medical specialists. This complexity suggests that one cannot adequately measure physician performance on the basis of a single score or a few scores. Instead, one must obtain a relatively large number of scores. Performance in both premedical and medical education was independent of performance as a physician. (19 ref.) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
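As a rough sketch of the analytic pipeline named here (principal components of the intercorrelation matrix, retention of components with eigenvalues above 1.00, varimax rotation), the snippet below runs the same steps on simulated data standing in for the 80 physician-performance scores; the data and dimensions are placeholders, and the varimax routine is a generic implementation rather than the authors' software.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix (standard algorithm)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        new_criterion = s.sum()
        if new_criterion < criterion * (1 + tol):
            break
        criterion = new_criterion
    return loadings @ rotation

rng = np.random.default_rng(0)
scores = rng.normal(size=(190, 80))          # stand-in for 190 physicians x 80 scores
corr = np.corrcoef(scores, rowvar=False)     # intercorrelation matrix

eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # sort components by descending eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals > 1.0                         # retain components with eigenvalue > 1.00
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)
print(f"{keep.sum()} components retained; rotated loadings shape {rotated.shape}")
```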

4.
Developed scales to assess one individual's trust in another in meaningful interpersonal relationships. For males, the scale included factors of reliableness, emotional trust, and general trust. For females, similar but not identical reliableness and emotional trust factors emerged. The scales demonstrated adequate reliability and were discriminable from the related constructs of liking and love. In Exp I, 435 undergraduates' responses on the Reliableness subscale varied appropriately as a function of the reliable or nonreliable behavior of the target person. In Exp II, 84 undergraduates' responses on the Emotional Trust subscale varied appropriately when the target person either betrayed or did not betray a confidence. In both experiments, the appropriate subscale was more sensitive to experimental manipulations than were the other trust subscales, attesting to the discriminant validity of the trust factors. (27 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
The contribution of decision factors to the meridional variations in line orientation discrimination (OD) was determined for 2-alternative forced-choice experimental designs. With K. O. Johnson's (1980) formalization of decision processes in discrimination tasks, 3 decision factors were identified: decision rule, memory variance, and criterial noise. Exp I (with 13 Ss) showed the effect of experimental design on OD to be similar at horizontal and oblique standard orientations, indicating that the meridional variations in OD were not due to a decision rule anisotropy. In Exp II (with 5 Ss) the effect of the interstimulus interval was also found to be similar at both standard orientations, suggesting that the memory variance is isotropic in the orientation domain. Exps III and IV (with a total of 7 Ss) supported the hypothesis that the meridional variations in OD are not due to a criterial noise anisotropy. Results strongly suggest that the oblique effect in OD is due to sensorial factors rather than to decision factors. Therefore, they further support the hypothesis linking the anisotropy of the preferred orientation distribution of Area 17-S cells (a single physiologically defined class of cells in the primary visual cortex) and the meridional variations in OD. (58 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Contends that in the pattern analysis of any of the Wechsler Intelligence Scales, the comparison of S's score on each subtest with that S's average Verbal or Performance subtest score, or with the overall average, has an advantage over pairwise comparisons of one subtest score with another. Sizable differences between the Verbal and Performance averages (relatively common) can give a distorted picture. An abnormal (not pathological) difference is a finding to be explained, and diverse explanations are possible. A formula for evaluating reliability and abnormality is included. (8 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
The sample consisted of 217 general practitioners from Utah. 80 scores relevant to the performance of these physicians were collected from a variety of sources, intercorrelated, and factor analyzed using the principal components solution based on eigenvalues and eigenvectors. The 30 factors which had an eigenvalue greater than 1.00 were rotated by the varimax procedure and interpreted. The most important finding was the great criterion complexity for this group of physicians. This complexity suggests that one cannot adequately measure physician performance on the basis of a single score or a few scores. Instead, one must obtain a relatively large number of scores. Performance in both premedical and medical education was independent of performance as a physician. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Self-efficacy and mathematics achievement: A study of their relation.
In this study, I investigated the relation between self-efficacy and mathematics achievement when other factors, such as self-concept of math ability, prior task achievement, and prior self-efficacy, were taken into account. I assessed self-efficacy over 4 trials in a repeated-measures design with 72 children, aged 9–10 years. I assessed task performance after the first and third self-efficacy assessment. Regression analysis indicated small or no predictive relation between self-efficacy and task performance, depending on task familiarity, when these other factors were included in the analysis. Results of the study lead one to doubt that there is a simple relation between self-efficacy and task performance in the field of mathematics learning. The complexity of self-efficacy, its sources, and its consequences are also illustrated. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Although Freud considered the rule of free association to be fundamental, he was tentative about the recommendations he made concerning other aspects of analysis. Three quarters of a century later, there is still no formal theory of the working arrangements in treatment, and even the fundamental rule is considered by analytic clinicians to be optional. I portray therapy as a dyadic social system and examine its primary task, boundaries, divisions of labor and authority, and culture in order to weigh the importance of the fundamental rule to task achievement. I find several advantages to making free association a role requirement upon the patient, including, inter alia, the freedom it provides the therapist for relaxed observation, counterassociation, and thought. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Reports on 2 studies using the Esper paradigm to determine development of rule application and discovery capabilities. This paradigm employs both learning and generalization phases. In Exp I with 48 3rd and 4th graders, it was determined that Ss could learn and generalize when rule and structure were provided, but there was little evidence of rule discovery. In Exp II with 48 different 3rd and 4th graders, memory and attention manipulations were added. Both manipulations facilitated learning, but only attention facilitated rule discovery. In both studies 4th graders performed better than 3rd graders on generalization but not learning. The relationship between performance on the Esper task and the Raven Coloured Progressive Matrices (given to all Ss) was inconsistent; nevertheless, covarying it out removed the significant grade effect but not the experimental effects. (14 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
This is written in response to Ross Stagner's comments (see record 2005-11890-003) concerning the publication of books of readings. First, it is my experience that it is far easier to author a book than to edit readings. I don't assume that people who write the original articles that finally find themselves in a book of readings are any more creative than the editors. I don't know how much of a reputation anyone gets from authoring or editing a readings book. As for "good solid cash" (to use Stagner's words), I have yet to see some, and my experience is not unique. I have paid out a considerable amount of money in secretarial fees alone. If I recoup the money I have expended, I will be fortunate. As for so-called profits, if I send one copy of the book to each author and his co-author(s) who contributed an article for a book of readings--well, there goes the "good solid cash." Second, there are many articles that are rescued from the scrap pile by a book of readings. My suggestion is that after the editor of a readings book recoups his expenses in preparing the book, copies of the book be sent to clinics or libraries which are on a limited budget. Copies may even be sent to some of the "underprivileged nations." (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
The time-consuming aspect of the Wechsler intelligence scales has prompted their frequent abbreviation in clinical practice. The use of selected items from each subtest has been a particularly attractive method of shortening because it reduces administration time by about 50% and yet gives scores for each subscale. To test the reliability of scores obtained from this method, 200 protocols of the WAIS were rescored according to short-form procedure and reliabilities based on split-half correlations obtained. It was hypothesized that one could predict the reliability of the shortened WAIS on the basis of the Spearman-Brown formula, and that in testing, as in other fields, "you get what you pay for." Results confirm these hypotheses: a short form is not an adequate substitute for the full WAIS. (51 ref.) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
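The Spearman-Brown prediction invoked here is simple to compute. The sketch below gives the general formula and the half-length case used when a short form keeps roughly 50% of the items; the reliability values are placeholders, not results from the 200 WAIS protocols.

```python
def spearman_brown(r, k):
    """Predicted reliability when test length is changed by factor k,
    given the reliability r of the original length."""
    return k * r / (1 + (k - 1) * r)

full_scale_reliability = 0.95            # illustrative value only
half_length = spearman_brown(full_scale_reliability, k=0.5)
print(f"predicted half-length reliability: {half_length:.3f}")

# going the other way: stepping a split-half correlation up to full length
split_half_r = 0.88                      # illustrative value only
print(f"full-length estimate: {spearman_brown(split_half_r, k=2):.3f}")
```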

13.
Presents counterarguments to J. Chinsky and J. Rappaport's assertion that accurate empathy ratings reflect a quality other than that defined by the scale and to their suggestion that reliability estimates are in general inflated and may be related to the number of therapists being rated in a given study. Research evidence and arguments are presented that demonstrate that (a) the accurate empathy scale tends to measure what theorists and lay people in general think of as understanding vs. not understanding, (b) there is no relationship between the reliability estimates per study and the number of therapists being rated, and (c) the reliability estimates in most of the studies are appropriate and generally accepted as so by competent statisticians. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
We consider the situation in which a learner must induce the rule that explains an observed set of data but the hypothesis space of possible rules is not explicitly enumerated or identified. The first part of the article demonstrates that as long as hypotheses are sparse (i.e., index less than half of the possible entities in the domain), a positive test strategy is near optimal. The second part of this article then demonstrates that a preference for sparse hypotheses (a sparsity bias) emerges as a natural consequence of the family resemblance principle; that is, it arises from the requirement that good rules index entities that are more similar to one another than they are to entities that do not satisfy the rule. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
16.
As more and more subtests are added to the short form, its validity as determined by McNemar's formula approaches unity, whereas the upper limit to its validity as determined by the corrected formula is the reliability of the Full Scale. This difference seems to correspond to Kaufman's distinction between using the short form as part of the Full Scale and using it as a replacement. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
In his American Psychologist article, Joseph Lerner (see record 1964-01189-001) kindly ascribed to me words which properly belong to Samuel J. Beck. Beck does refer to my Perceptanalysis (Piotrowski & Lewis, 1957), but not on the same page. His words express my past belief. At present my attitude is more complex. It changed after I checked some "blind" Rorschach diagnoses and clinical psychiatric diagnoses on the same patients (Piotrowski, 1950, p. 363), and read published reviews of the reliability and validity of clinical psychiatric diagnoses. These revealed that a considerable percentage of first admission patients, discharged as psychoneurotics, are rediagnosed as schizophrenics after a re-examination several years later. In fact, some schizophrenic conditions escape detection through clinical observations for as long as 10 years, despite intermittent clinical examinations. The Rorschach test definitely is highly sensitive to schizophrenia even though at times some remitted or much improved schizophrenics produce test records failing to give any indication of the psychosis, let alone of the past acute psychotic episodes. Lerner stated that "the Rorschach alone is of little assistance unless it is an integral part of the total evaluation." Well, if the Rorschach is never used as an independent diagnostic criterion, we shall never know how good or bad a diagnostic criterion it is. Using it as a part source of information is to contaminate it (that is why "blind" diagnoses are important). The second point is: It seems advisable to follow the rule that if clinical observations or the Rorschach test--or both--suggest schizophrenia, this diagnosis is likely to be valid. This rule is compatible with Lerner's conclusion that an evaluation based on all available sources of information is better than one which utilizes only one diagnostic criterion, be it test, anamnesis, or clinical examination. To be certain that the Rorschach test is a dependable diagnostic criterion in neuropsychiatry, we must first have highly reliable diagnostic test procedures. A digital computer program of Rorschach interpretation, including numerous diagnostic formulae, has been written to achieve objective and perfectly reliable application of the diagnostic test rules to individual cases. The computer program will be submitted to a stringent test of validity. We shall then be in possession of a test which will yield independent and uncontaminated diagnoses. These, in turn, will be available for use independently or as part of a "total evaluation." (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Null hypothesis statistical testing (NHST) has been debated extensively but always successfully defended. The technical merits of NHST are not disputed in this article. The widespread misuse of NHST has created a human factors problem that this article intends to ameliorate. This article describes an integrated, alternative inferential confidence interval approach to testing for statistical difference, equivalence, and indeterminacy that is algebraically equivalent to standard NHST procedures and therefore exacts the same evidential standard. The combined numeric and graphic tests of statistical difference, equivalence, and indeterminacy are designed to avoid common interpretive problems associated with NHST procedures. Multiple comparisons, power, sample size, test reliability, effect size, and cause-effect ratio are discussed. A section on the proper interpretation of confidence intervals is followed by a decision rule summary and caveats. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
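The abstract gives no formulas, so the following is only a sketch of how an inferential-confidence-interval test of difference, equivalence, and indeterminacy is often set up: each group's confidence interval is shrunk by a factor E = SE_diff / (SE1 + SE2) so that non-overlap of the shrunken intervals corresponds to a significant two-sample t test. The reduction factor, the equivalence margin, and the decision labels below are assumptions for illustration, not necessarily the article's exact procedure.

```python
import numpy as np
from scipy import stats

def inferential_cis(x1, x2, alpha=0.05, equivalence_margin=None):
    """Reduced ('inferential') CIs whose non-overlap mirrors a two-sample t test."""
    m1, m2 = np.mean(x1), np.mean(x2)
    se1, se2 = stats.sem(x1), stats.sem(x2)
    sed = np.hypot(se1, se2)                 # standard error of the difference
    e = sed / (se1 + se2)                    # reduction factor applied to each CI
    df = len(x1) + len(x2) - 2
    crit = stats.t.ppf(1 - alpha / 2, df)
    ci1 = (m1 - e * crit * se1, m1 + e * crit * se1)
    ci2 = (m2 - e * crit * se2, m2 + e * crit * se2)
    different = ci1[1] < ci2[0] or ci2[1] < ci1[0]   # intervals do not overlap
    verdict = "difference" if different else "indeterminacy"
    if equivalence_margin is not None and not different:
        max_probable_diff = max(ci1[1], ci2[1]) - min(ci1[0], ci2[0])
        if max_probable_diff < equivalence_margin:
            verdict = "equivalence"
    return ci1, ci2, verdict

rng = np.random.default_rng(1)
a = rng.normal(100, 15, size=40)             # illustrative data only
b = rng.normal(108, 15, size=40)
print(inferential_cis(a, b, equivalence_margin=5.0))
```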

19.
The authors present and test a new method of teaching Bayesian reasoning, something about which previous teaching studies reported little success. Based on G. Gigerenzer and U. Hoffrage's (1995) ecological framework, the authors wrote a computerized tutorial program to train people to construct frequency representations (representation training) rather than to insert probabilities into Bayes's rule (rule training). Bayesian computations are simpler to perform with natural frequencies than with probabilities, and there are evolutionary reasons for assuming that cognitive algorithms have been developed to deal with natural frequencies. In 2 studies, the authors compared representation training with rule training; the criteria were an immediate learning effect, transfer to new problems, and long-term temporal stability. Rule training was as good in transfer as representation training, but representation training had a higher immediate learning effect and greater temporal stability. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
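The contrast between inserting probabilities into Bayes's rule and counting natural frequencies is easy to show in code. The screening-problem numbers below (base rate, hit rate, false-alarm rate) are illustrative assumptions, not stimuli from the tutorial program.

```python
# Hypothetical screening problem (numbers are illustrative only)
base_rate = 0.01        # P(condition)
hit_rate = 0.80         # P(positive | condition)
false_alarm = 0.096     # P(positive | no condition)

# Probability format: insert the probabilities into Bayes's rule
posterior = (base_rate * hit_rate) / (
    base_rate * hit_rate + (1 - base_rate) * false_alarm
)

# Natural-frequency format: imagine 1,000 people and count
population = 1000
with_condition = round(population * base_rate)                        # 10 people
true_positives = round(with_condition * hit_rate)                     # 8 test positive
false_positives = round((population - with_condition) * false_alarm)  # 95 healthy positives
frequency_answer = true_positives / (true_positives + false_positives)

print(f"Bayes's rule on probabilities: {posterior:.3f}")
print(f"Counting natural frequencies:  {frequency_answer:.3f}")
```

Both routes give essentially the same posterior, but the frequency version requires only counting and one division, which is the simplification the representation training exploits.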

20.
A formula for estimating the average reliability of a set of rankings, based on n sets, each reduced to standard score form, is presented. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
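The one-sentence abstract does not state the formula, so the sketch below only illustrates a generic version of the idea under explicit assumptions: each of the n rankings is reduced to standard scores, the average inter-correlation among the n sets estimates the average reliability of a single ranking, and the Spearman-Brown relation steps that value up to the reliability of the composite. This is a stand-in approach, not necessarily the formula the article derives.

```python
import numpy as np
from itertools import combinations

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

def average_ranking_reliability(rankings):
    """rankings: n_sets x n_items array of rankings of the same items."""
    z = np.array([zscore(r) for r in rankings])         # reduce each set to standard scores
    pairs = [np.corrcoef(z[i], z[j])[0, 1]
             for i, j in combinations(range(len(z)), 2)]
    r_single = np.mean(pairs)                            # average reliability of one ranking
    n = len(z)
    r_composite = n * r_single / (1 + (n - 1) * r_single)  # Spearman-Brown for the average
    return r_single, r_composite

rankings = [
    [1, 2, 3, 4, 5, 6],      # illustrative rankings from three judges
    [2, 1, 3, 5, 4, 6],
    [1, 3, 2, 4, 6, 5],
]
print(average_ranking_reliability(rankings))
```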
