Similar Documents
Found 20 similar documents, search time 31 ms
1.
108 undergraduates were randomly assigned to 1 of 4 experimental groups to rate videotaped performances of several managers talking with a problem subordinate. The research employed a single-factor experimental design in which rater error training (RET), rater accuracy training (RAT), rating error and accuracy training (RET/RAT), and no training were compared for 2 rating errors (halo and leniency) and accuracy of performance evaluations. Differences in program effectiveness for various performance dimensions were also assessed. Results show that RAT yielded the most accurate ratings and no-training the least accurate ratings. The presence of error training (RET or RET/RAT) was associated with reduced halo, but the presence of accuracy training (RAT or RET/RAT) was associated with less leniency. Dimensions × Training interactions revealed that training was not uniformly effective across the rating dimensions. (23 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
3.
Tested C. E. Schneier's (see record 1978-11450-001) cognitive compatibility theory. In Exps I and II, 100 undergraduates rated college instructors and professor vignettes, respectively. Results show that rater cognitive complexity was unrelated to rating accuracy, halo error, acceptability of rating format, or confidence in ratings. In Exp III, 31 police sergeants rated patrol officers, and the results show that halo error and acceptability of formats were unrelated to cognitive complexity. In Exp IV, 95 undergraduates' ratings of managerial performance and instructor effectiveness showed no support for the cognitive compatibility theory. However, the data showed that raters' ability to generate dimensions was significantly related to halo error in instructors' ratings. Implications for cognitive compatibility theory and future research with the method of generating performance dimensions are discussed. (30 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
A total of 52 supervisory personnel were trained under one of three performance-appraisal training programs: rater error (response set) training, observation training, or decision-making training. Halo, leniency, range restriction, and accuracy measures were collected before and after training from the three training groups and a no-training control group. The results suggested that although the traditional rater error training, best characterized as inappropriate response set training, reduced the classic rater errors (or statistical effects), it also detrimentally affected rating accuracy. However, observation and decision-making training caused performance rating accuracy to increase after training, but did little to reduce classic rater effects. The need for a reconceptualization of rater training content and measurement focus was discussed in terms of the uncertain relation between statistical rating effects and accuracy. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Assigned 60 managers in a large corporation to a workshop, a group discussion, or a control group. The workshop and group discussion involved training directed toward the elimination of rating errors that occur in performance appraisal and selection interviews (i.e., contrast effects, halo effect, similarity, and first impressions). 6 mo after the training, Ss rated hypothetical candidates who were observed on videotape. Results show that (a) trainees in the control group committed similarity, contrast, and halo errors; (b) trainees in the group discussion committed impression errors; and (c) trainees in the workshop committed none of the errors. The importance of observer training for minimizing the "criterion problem" in industrial psychology is discussed. (19 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Compared 2 models of the cognitive processes underlying performance ratings: a traditional model outlined by W. C. Borman (see record 1980-26801-001), and a cognitive categorization model proposed by J. M. Feldman (see record 1981-24524-001). To distinguish these 2 models, 120 college students were shown 1 of 2 videotapes of a college lecturer in which 3 of 5 dimensions of performance were manipulated to be opposite to that of the lecturer's overall performance. Ratings were made either immediately after viewing the videotape or 2 days later. Results indicate that the traditional model was appropriate for describing the rating process in both the immediate and the delayed rating conditions. However, a large halo effect was also found that was consistent with the categorization model despite conditions designed to minimize the likelihood of halo. Additional effects of cognitive categorization included a tendency to make errors in later recall of lecturing incidents consistent with Ss' general impression. (48 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Performance ratings of 294 clerical workers in a validation study of clerical ability tests indicated that halo, measured as the standard deviation across dimensions, consistently moderated the relationships between dimension ratings and scores on valid tests. Greater halo resulted in higher validity coefficients, and also was related to higher performance ratings. In an additional analysis, statistically controlling for the effect of the overall rating on dimension ratings resulted in poorer validation results, with dimension ratings rarely adding additional variance to that of overall ratings. The results of this study contradict the traditionally held view of halo as a rating "error," and are consistent with recent laboratory studies that have found accuracy and halo positively related. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
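The operationalization in this abstract — halo as the standard deviation of a ratee's scores across dimensions, with a smaller spread indicating more halo — can be sketched in a few lines. The rating scale, ratee names, and numbers below are hypothetical illustrations, not data from the study:

```python
from statistics import pstdev, mean

def halo_index(dim_ratings):
    """Halo operationalized as the standard deviation across a single
    ratee's dimension ratings: a SMALLER spread means the dimensions
    covary more, i.e. MORE halo under this definition."""
    return pstdev(dim_ratings)

# Hypothetical dimension ratings on a 1-7 scale for two clerical workers.
ratings = {
    "ratee_a": [6, 6, 5, 6, 6],  # little spread -> strong halo by this index
    "ratee_b": [2, 6, 4, 7, 3],  # large spread -> weak halo
}

for ratee, r in ratings.items():
    print(ratee, "halo index:", round(halo_index(r), 2),
          "mean rating:", round(mean(r), 2))
```

Note that under this index "more halo" corresponds to a lower number, which is why the abstract's finding (greater halo, higher validity) is stated in terms of the spread, not the raw SD value.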

8.
Investigated the effects of perceived purpose for rating and training type on the following dependent variables: accuracy, leniency/severity, and illusory halo. The purpose factor comprised 3 levels: a hiring purpose, a feedback purpose, and a research-only purpose. The training factor comprised 4 levels: rater error (RE) training, frame-of-reference (FOR) training, the combination of both methods, and no training. With both factors crossed, 164 undergraduates were randomly assigned to 1 of 12 conditions and viewed videotapes of lectures given by bogus graduate assistants. Heterogeneity of variance made it necessary to apply a conservative analytical strategy. Training significantly affected 2 measures of accuracy and halo, such that training conditions containing an FOR component did better than RE or no training. The conservative analytic strategy rendered the effects of the purpose factor on correlation accuracy, leniency/severity, and halo only tentative, and it eliminated the 1 interaction effect of the 2 factors on distance accuracy. Discussion centers on (a) comparison of the results with those of S. Zedeck and W. Cascio (see record 1983-09102-001), (b) potential reasons for the heteroscedasticity, and (c) implications for the development of student evaluations of university instructors. (32 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Frame-of-reference (FOR) rater training is one technique used to impart a theory of work performance to raters. In this study, the authors explored how raters' implicit performance theories may differ from a normative performance theory taught during training. The authors examined how raters' level and type of idiosyncrasy predicts their rating accuracy and found that rater idiosyncrasy negatively predicts rating accuracy. Moreover, although FOR training may improve rating accuracy even for trainees with lower performance theory idiosyncrasy, it may be more effective in improving errors of omission than commission. The discussion focuses on the roles of idiosyncrasy in FOR training and the implications of this research for future FOR research and practice. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
The different conceptual and operational definitions of halo are reviewed, and problems when using halo as a dependent measure in performance rating research and practice are pointed out. Four major points are emphasized: (1) There is no agreed-upon conceptual definition of halo; (2) the different conceptual definitions of halo are not systematically related to different operational definitions (i.e., measures) of halo; (3) halo measures may be poor indexes of rating quality in that different halo measures are not strongly interrelated and halo measures are not related to measures of rating validity or accuracy; and (4) although halo may be a poor measure of rating quality, it may or may not be an important measure of the rating process. The utility of assessing halo to determine the psychometric quality of rating data is questioned. Halo may be more appropriately used as a measure to study cognitive processing, rather than as a measure of performance rating outcome. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
The traditional assumption has been that halo error is negatively related to accuracy of ratings. W. H. Cooper (1981) evaluated this assumption by examining correlation coefficients between measures of accuracy and halo error from five earlier studies of performance and trait ratings. Because the correlation coefficients were typically positive, Cooper concluded that a "paradoxical" positive relation exists between halo error and accuracy. However, there is no paradox; some of these positive correlation coefficients were between halo error and inaccuracy, whereas others were based on analyses that did not take into consideration negative halo errors. When analyses that correct these problems were performed on two sets of data (R. Tallarigo, 1986, n = 107; R. J. Vance, K. W. Kuhnert, & J. L. Farr, 1978, n = 112), all of the significant relations between halo error and accuracy were negative. The use of halo error measures, the possibility of negative halo errors, and implications of the results for rater training are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Studied 52 nursing supervisors to examine the effects of a lecture on rating errors, discussion about errors, and participation in scale construction on both experimental and subsequent administrative ratings. On experimental ratings, scale construction reduced halo and variability errors, lecture reduced variability errors, and discussion increased variability errors. These results held true only for raters who began making ratings within 1 wk after training, before administration of questionnaires designed to measure rater motivation and knowledge. (27 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Investigated the effects (over time) of a comprehensive vs an abbreviated rater training session on relative levels of leniency error and halo effect. 80 undergraduates (20 per group) rated all of their nonlaboratory instructors over 1, 2, or 3 rating periods using either behavioral expectation scales or summated rating scales. Tests on psychometric error were also administered at these times. Results indicate that the psychometric quality of ratings was superior for the group receiving the comprehensive training, and both training groups were superior to the control groups at the 1st measurement period. No differences were found between any groups in later comparisons. A consistent relationship was found between scores on the tests of psychometric error and error as measured on the ratings. Results are discussed in terms of the diminishing effect of rater training over rating periods, the relationship of internal and external criteria of training effects, the practical significance of differences between groups, and the importance of rating context on rating quality. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Rater bias in performance ratings: Superior, self-, and peer ratings.
Leniency errors, halo effects, and differential dimensionality were explored in an analysis of superior, self-, and peer performance ratings of 107 managerial and 76 professional employees in a medium-sized manufacturing location, representing 95% of the managerial and professional staff. Self-ratings showed greater leniency effects than superior or peer ratings. A multitrait–multimethod (MTMM) analysis supported the presence of strong halo effect and significant convergent validity but not discriminant validity. The dimensional analysis supported the presence of strong halo effect. A statistical control procedure for the halo effect was developed that involved calculating residuals of the performance items, controlling for the "overall effectiveness" variance component in each item. The procedure did not reduce the significant halo effect, nor did it improve the nonsignificant discriminant validity in the MTMM analysis. It did, however, clarify the dimensional structure of ratings by superiors. Data from 4 previously published studies were also reanalyzed using the statistical control procedure. (19 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
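The residual-based control procedure described above can be illustrated with a small least-squares sketch. This assumes the "overall effectiveness" component is partialed out of each performance item by ordinary linear regression; the function name `residualize` and the rating values are hypothetical, not taken from the study:

```python
from statistics import mean

def residualize(dim, overall):
    """Regress one performance item's ratings on the overall-effectiveness
    ratings and return the residuals: the item variance remaining after
    the shared 'overall' component is removed."""
    mx, my = mean(overall), mean(dim)
    sxy = sum((x - mx) * (y - my) for x, y in zip(overall, dim))
    sxx = sum((x - mx) ** 2 for x in overall)
    b = sxy / sxx          # least-squares slope
    a = my - b * mx        # intercept
    return [y - (a + b * x) for x, y in zip(overall, dim)]

# Hypothetical ratings for five ratees on a 1-7 scale.
overall = [7, 6, 4, 3, 5]      # "overall effectiveness" ratings
planning = [6, 6, 4, 2, 5]     # one performance item
resid = residualize(planning, overall)
# By construction the residuals sum to zero and are uncorrelated with
# `overall`, so any remaining covariance among items is not attributable
# to the shared overall-effectiveness component.
```

Repeating this for every performance item and then re-running the MTMM or dimensional analysis on the residuals is the general shape of such a control procedure.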

15.
Criticizes an article by F. Landy et al (see record 1981-00274-001), which assumed that a general factor present in the intercorrelations of ratings of performance was generated by halo rating errors. A number of decision rules were used to generate an analytic procedure for extracting halo errors, correcting ratings, and interpreting factors present in the corrected correlations. A number of the decision rules, as well as the initial assumption, seem to be rules of thumb or to depend on implicit theories of ratings and not on empirical data. Reanalysis of the rating data to allow both general 2nd- and 1st-order factors to be expressed in terms of item loadings recovered the structure present in the correlation of the original ratings as well as the psychological meanings of the 1st-order factors. General factors in rating data resemble general factors in measures of human ability. It is argued that removing general factors as if they were halo rather than true score may eliminate more of the variance from rating data than is justifiable. (8 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Investigated the effects of a short training session designed to reduce halo error in performance ratings. 90 low and middle managers rated 1 of 6 hypothetical first-line supervisors on 6 performance dimensions according to behavior displayed in a prepared vignette. Ratings were taken prior to and following the 5-min training session, with rater-ratee combinations counterbalanced. The vignettes were developed to contain previously scaled behavior examples, thus enabling the calculation of "true" criterion scores for each dimension. Comparisons between these "true" criterion scores and the performance ratings revealed that the training session significantly reduced halo, while leaving validity of the ratings generally unaffected. Performance ratings completed after training possessed lower reliability, although raters provided somewhat more accurate performance profiles. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
In several social perception studies investigators have concluded that raters' semantic conceptual similarity schemata serve to guide and constrain dimensional covariance in the rating judgment process. This effect has been hypothesized to be most likely when ratings are memory based and raters lack relevant job or ratee information. Recent research that has explored the effects of conceptual similarity schemata on performance ratings and halo error has provided some limited support for this systematic distortion hypothesis (SDH). However, these studies are limited because researchers have examined this phenomenon using group-level analyses, whereas the theory references individual-level judgment processes. The present study investigated the phenomenon at the individual level. The effects of varying levels of rater job knowledge (high, medium, and low) and familiarity with ratees (high and low) were examined for conceptual similarity–rating and rating–true-score covariation relations, for measures of halo, and for rating accuracy components. Results provided support for the SDH, but indicated a boundary condition for its operation and revealed some surprising findings for individual-level rater halo. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Undergraduate Ss possessing normative or idiosyncratic rating standards were given frame-of-reference training, rater-error training, training that controlled for structural similarities between frame-of-reference training and rater-error training, or null control training. Hypothesized pretest differences that normative raters are more accurate than idiosyncratic raters were not found. However, when data were collapsed across rating aptitude, different trainings were found to improve different measures of accuracy. Frame-of-reference trainees were most accurate on stereotype accuracy and differential accuracy, rater-error trainees were most accurate on elevation, and all groups improved on differential elevation. Results are discussed in relation to the role of rater aptitude in frame-of-reference training and the future of rater-training programs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
18 psychiatric nursing staff members (mean age 32 yrs) participated in an experimental training study to test the effectiveness of a brief microtraining instructional format against a traditional discussion training format. Results indicate that both microtraining and discussion treatments produced improved in-vivo performance of verbal and nonverbal social-approval skills, but microtraining treatment resulted in significantly greater in-vivo use of both verbal and nonverbal social-approval skills at posttreatment and a 5-wk follow-up. No differences in skill comprehension were evident across the 2 training treatments. (French abstract) (21 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
We use levels-of-processing theory and social facilitation theory to explain the effect of training format and group size on distance and correlation accuracy, leniency-severity, halo, retention of training and pretraining information, and subject arousal. The training factor included frame-of-reference (FOR) training, information only (INFO) training, and no training (NOT). Group size was n = 1, n = 6, and n = 12, respectively. A total of 108 subjects, randomly assigned to one of nine Training × Group Size conditions, viewed and rated videotaped lectures. Results indicated that FOR training produced improved retention of training information, improved distance accuracy, and less halo than INFO training or NOT.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号