Similar Literature
1.
Investigated the effects (over time) of a comprehensive vs an abbreviated rater training session on relative levels of leniency error and halo effect. 80 undergraduates (20 per group) rated all of their nonlaboratory instructors over 1, 2, or 3 rating periods using either behavioral expectation scales or summated rating scales. Tests on psychometric error were also administered at these times. Results indicate that the psychometric quality of ratings was superior for the group receiving the comprehensive training, and both training groups were superior to the control groups at the 1st measurement period. No differences were found between any groups in later comparisons. A consistent relationship was found between scores on the tests of psychometric error and error as measured on the ratings. Results are discussed in terms of the diminishing effect of rater training over rating periods, the relationship of internal and external criteria of training effects, the practical significance of differences between groups, and the importance of rating context on rating quality. (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
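The leniency and halo indices examined throughout these abstracts are typically computed from a ratee × dimension matrix of ratings. A minimal sketch of two common operationalizations follows; this is an illustration with hypothetical data, not any study's actual procedure.

```python
import statistics

# Hedged illustration (hypothetical 1-7 scale data, not from the studies above):
# two common operationalizations of leniency and halo for a ratee x dimension
# matrix of performance ratings.

def leniency(ratings, scale_midpoint):
    """Leniency: mean elevation of all ratings above the scale midpoint."""
    flat = [r for row in ratings for r in row]
    return sum(flat) / len(flat) - scale_midpoint

def halo(ratings):
    """Halo: mean within-ratee standard deviation across dimensions.
    Smaller values indicate more halo, i.e., less dimension discrimination."""
    return sum(statistics.stdev(row) for row in ratings) / len(ratings)

ratings = [
    [6, 6, 7, 6],  # ratee 1: uniformly high ratings across dimensions
    [5, 6, 5, 6],  # ratee 2
]
print(leniency(ratings, scale_midpoint=4))  # 1.875
print(round(halo(ratings), 3))              # 0.539
```

Operationalizations vary across the studies cited here (some use inter-dimension correlations for halo rather than within-ratee variability); the point is only that both indices reduce to simple summaries of the rating matrix.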

2.
A total of 52 supervisory personnel were trained under one of three performance-appraisal training programs: rater error (response set) training, observation training, or decision-making training. Halo, leniency, range restriction, and accuracy measures were collected before and after training from the three training groups and a no-training control group. The results suggested that although the traditional rater error training, best characterized as inappropriate response set training, reduced the classic rater errors (or statistical effects), it also detrimentally affected rating accuracy. However, observation and decision-making training caused performance rating accuracy to increase after training, but did little to reduce classic rater effects. The need for a reconceptualization of rater training content and measurement focus was discussed in terms of the uncertain relation between statistical rating effects and accuracy.

3.
144 deputy sheriffs were rated on 9 job performance dimensions with 2 rating scales by 2 raters. Results indicate that the rating scales (the Multiple Item Appraisal Form and the Global Dimension Appraisal Form) developed in this study were able to minimize the major problems often associated with performance ratings (i.e., leniency error, restriction of range, and low reliability). A multitrait/multimethod analysis indicated that the rating scales possessed high convergent and discriminant validity. A multitrait/multirater analysis indicated that although the interrater agreement and the degree of rated discrimination on different traits by different raters were good, there was a substantial rater bias, or strong halo effect. This halo effect in the ratings, however, may really be a legitimate general factor rather than an error. (11 ref)

4.
66 supervisory engineers were randomly assigned to (a) an intense training group, (b) a discussion group, or (c) a nontrained comparison group. The intense training and discussion Ss received 14 hrs of rater training designed to minimize halo and leniency error and to use the organization's behavioral expectation scale for engineers more effectively. A longitudinal research design was used to study the halo and leniency errors 6 mo before training (TB), 6 mo after training (T6), and 12 mo after training (T12). Using covariance analysis, ANOVA with repeated measures, and planned comparisons, findings indicate that the intense training module (which included a 6-hr videotape block) was superior to the discussion and comparison groups in reducing halo and leniency error. However, a noticeable dissipation of training effect on these psychometric errors was identified when the T6 and T12 data were examined. (12 ref)

5.
Investigated the effects of perceived purpose for rating and training type on the following dependent variables: accuracy, leniency/severity, and illusory halo. The purpose factor comprised 3 levels: a hiring purpose, a feedback purpose, and a research-only purpose. The training factor comprised 4 levels: rater error (RE) training, frame-of-reference (FOR) training, the combination of both methods, and no training. With both factors crossed, 164 undergraduates were randomly assigned to 1 of 12 conditions and viewed videotapes of lectures given by bogus graduate assistants. Heterogeneity of variance made it necessary to apply a conservative analytical strategy. Training significantly affected 2 measures of accuracy and halo such that training conditions containing an FOR component did better than RE or no training. The conservative analytic strategy rendered the effects of the purpose factor on correlational accuracy, leniency/severity, and halo only tentative, and it dissipated the 1 interaction effect of the 2 factors on distance accuracy. Discussion centers on (a) comparison of the results with those of S. Zedeck and W. Cascio (see record 1983-09102-001), (b) potential reasons for the heteroscedasticity, and (c) implications for the development of student evaluations of university instructors. (32 ref)

6.
7.
This study extends multisource feedback research by assessing the effects of rater source and raters' cultural value orientations on rating bias (leniency and halo). Using a motivational perspective of performance appraisal, the authors posit that subordinate raters, followed by peers, will exhibit more rating bias than superiors. More important, given that multisource feedback systems were premised on low power distance and individualistic cultural assumptions, the authors expect raters' power distance and individualism-collectivism orientations to moderate the effects of rater source on rating bias. Hierarchical linear modeling on data collected from 1,447 superiors, peers, and subordinates who provided developmental feedback to 172 military officers shows that (a) subordinates exhibit the most rating leniency, followed by peers and superiors; (b) subordinates demonstrate more halo than superiors and peers, whereas superiors and peers do not differ; (c) the effects of power distance on leniency and halo are stronger for subordinates than for peers and superiors; (d) the effects of collectivism on leniency were stronger for subordinates and peers than for superiors; effects on halo were stronger for subordinates than superiors, but these effects did not differ for subordinates and peers. The present findings highlight the role of raters' cultural values in multisource feedback ratings.

8.
A comparison of behavioral expectation scales and graphic rating scales. (Total citations: 1; self-citations: 0; citations by others: 1)
Compared ratings derived from behavioral expectation scales developed by 147 personnel management students with ratings based on graphic rating scales. The ratees were 4 college professors, and the raters were the 183 students in their classes. The behaviorally anchored scales resulted in less halo error, or alternatively, more independence in ratings of different dimensions of performance. The behaviorally anchored scales did not correct for leniency in ratings. These results were observed both among raters who participated in developing the behavioral expectation scales and among similar raters who did not take part in this process. The factor structures of the 2 rating formats were essentially equivalent in "cleanness." Neither solution was judged superior to the other. However, the behavioral expectation scale format possessed greater discriminant validity.

9.
The traditional assumption has been that halo error is negatively related to accuracy of ratings. W. H. Cooper (1981) evaluated this assumption by examining correlation coefficients between measures of accuracy and halo error from five earlier studies of performance and trait ratings. Because the correlation coefficients were typically positive, Cooper concluded that a "paradoxical" positive relation exists between halo error and accuracy. However, there is no paradox; some of these positive correlation coefficients were between halo error and inaccuracy, whereas others were based on analyses that did not take into consideration negative halo errors. When analyses that correct these problems were performed on two sets of data (R. Tallarigo, 1986, n = 107; R. J. Vance, K. W. Kuhnert, & J. L. Farr, 1978, n = 112), all significant (p < .05) relations between halo error and accuracy were negative. The use of halo error measures, the possibility of negative halo errors, and implications of the results for rater training are discussed.

10.
C. E. Lance et al (see record 1994-17452-001) tested 3 different causal models of halo rater error (general impression [GI], salient dimension [SD], and inadequate discrimination [ID] models) and found that the GI model better accounted for observed halo rating error than did the SD or ID models. It was also suggested that the type of halo rater error that occurs might vary as a function of rating context. The purpose of this study was to determine whether rating contexts could be manipulated that favored the operation of each of these 3 halo-error models. Results indicate, however, that GI halo error occurred in spite of experimental conditions designed specifically to induce other forms of halo rater error. This suggests that halo rater error is a unitary phenomenon that should be defined as the influence of a rater's general impression on ratings of specific ratee qualities.

11.
A note on the statistical correction of halo error. (Total citations: 1; self-citations: 0; citations by others: 1)
Attempts to eliminate halo error from rating scales by statistical correction have assumed halo to be a systematic error associated with a ratee–rater pair that adds performance-irrelevant variance to ratings. Furthermore, overall performance ratings have been assumed to reflect this bias. Consideration of the source of halo error, however, raises the possibility that the cognitive processes resulting in halo also mediate expectations of and interactions with employees, indirectly influencing true performance and ability via instruction, feedback, and reinforcement. If so, it would not be possible to correct for halo error using overall performance ratings. (26 ref)

12.
Meta-analysis was used to determine the relationship between rater error measures and measures of rating accuracy. Data from 10 studies (N = 1,096) were used to estimate correlations between measures of halo, leniency, and range restriction and L. J. Cronbach's (1955) four measures of accuracy. The average correlation between error and accuracy was .05. No moderators of the error–accuracy relationship were found. Furthermore, the data are not consistent with the hypothesis that error measures are sometimes valid indicators of accuracy. The average value of the 90th percentile of the distribution of correlations (corrected for attenuation and range restriction) was .11. The use of rater error measures as indirect indicators of accuracy is not recommended.
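The meta-analytic corrections mentioned above rely in part on the standard disattenuation formula, which divides an observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch, with hypothetical reliability values not taken from the studies:

```python
import math

# Hedged illustration of the classic correction for attenuation used in
# meta-analysis; the reliabilities rxx and ryy below are hypothetical.

def disattenuate(r_observed, rxx, ryy):
    """Correct an observed correlation for unreliability in both measures."""
    return r_observed / math.sqrt(rxx * ryy)

# An observed error-accuracy correlation of .05, with assumed reliabilities
# of .60 and .70, corrects to about .08 -- still essentially zero.
print(round(disattenuate(0.05, 0.60, 0.70), 3))  # 0.077
```

Even after such corrections, the abstract's conclusion stands: a near-zero observed correlation remains near zero, which is why rater error measures are not recommended as proxies for accuracy.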

13.
Tested C. E. Schneier's (see record 1978-11450-001) cognitive compatibility theory. In Exps I and II, 100 undergraduates rated college instructors and professor vignettes, respectively. Results show that rater cognitive complexity was unrelated to rating accuracy, halo error, acceptability of rating format, or confidence in ratings. In Exp III, 31 police sergeants rated patrol officers, and the results show that halo error and acceptability of formats were unrelated to cognitive complexity. In Exp IV, 95 undergraduates' ratings of managerial performance and instructor effectiveness showed no support for the cognitive compatibility theory. However, the data showed that raters' ability to generate dimensions was significantly related to halo error in instructors' ratings. Implications for cognitive compatibility theory and future research with the method of generating performance dimensions are discussed. (30 ref)

14.
Compared Smith-Kendall type behaviorally anchored scales for derived performance dimensions (Format 1), scales for the same dimensions but without the behavioral anchors (Format 2), and scales for dimensions selected on an a priori basis (Format 3) on the basis of susceptibility to rater response biases. Raters were 30 graduate students and ratees were 3 associate professors whom the raters had had in succession during their 1st year of graduate study. Leniency error and composite halo error were present in all ratings; there was no evidence of relative or absolute halo errors in any ratings. There was some evidence that the use of scales for derived dimensions reduced leniency error and increased the amount of variance attributable to ratee differences. The scale reliabilities of the 3 formats were also determined. A discussion of the feasibility of obtaining relatively independent scales for several job performance dimensions is included. (15 ref)

15.
To date, extant research has not established how rater training affects the accuracy of data yielded from Direct Behavior Rating (DBR) methods. The purpose of the current study was to examine whether providing users of DBR methods with a training session that utilized practice and performance feedback would increase rating accuracy. It was hypothesized that exposure to direct training procedures would result in greater accuracy than exposure to a brief familiarization training session. Results were consistent with initial hypotheses in that ratings conducted by trained participants were more accurate than those conducted by the untrained participants. Implications for future practice and research are discussed.

16.
Studied 52 nursing supervisors to examine the effects of a lecture on rating errors, discussion about errors, and participation in scale construction on both experimental and subsequent administrative ratings. On experimental ratings, scale construction reduced halo and variability errors, lecture reduced variability errors, and discussion increased variability errors. These results held true only for raters who began making ratings within 1 wk after training, before administration of questionnaires designed to measure rater motivation and knowledge. (27 ref)

17.
In several social perception studies investigators have concluded that raters' semantic conceptual similarity schemata serve to guide and constrain dimensional covariance in the rating judgment process. This effect has been hypothesized to be most likely when ratings are memory based and raters lack relevant job or ratee information. Recent research that has explored the effects of conceptual similarity schemata on performance ratings and halo error has provided some limited support for this systematic distortion hypothesis (SDH). However, these studies are limited because researchers have examined this phenomenon using group-level analyses, whereas the theory references individual-level judgment processes. The present study investigated the phenomena at the individual level. The effects of varying levels of rater job knowledge (high, medium, and low) and familiarity with ratees (high and low) were examined for conceptual similarity–rating and rating–true-score covariation relations, for measures of halo, and for rating accuracy components. Results provided support for the SDH, but indicated a boundary condition for its operation and revealed some surprising findings for individual-level rater halo.

18.
Assessed the cognitive complexity of 96 undergraduates with the group version of the Role Construct Repertory (REP) Test, a factor analysis of REP test data, and a sorting task. Performance ratings for 3 of the Ss' instructors were obtained with behaviorally anchored rating scales, mixed standard rating scales, graphic rating scales, and simple "alternate" 3-point rating scales. No differences in leniency, halo, or range restriction emerged either as a function of raters' cognitive complexity or a Cognitive Complexity × Scale Format interaction. Raters' confidence in their ratings was not associated with either cognitive complexity or rating scale format. It is concluded that researchers of performance ratings should exercise restraint before confidently conferring moderator variable status on a cognitive complexity construct. (25 ref)

19.
130 undergraduates rated 33 paragraphs describing the performance of supermarket checkers for one of the following purposes: merit raise, development, or retention. The paragraphs were assembled using previously scaled behavioral anchors describing 5 dimensions of performance. The authors conclude that (a) purpose of the rating was a more important variable in explaining the overall variability in ratings than was rater training; (b) training raters to evaluate for some purposes led to more accurate evaluations than training for other purposes; and (c) rater strategy varied with purpose of the rating (i.e., identical dimensions were weighted, combined, and integrated differently as a function of purpose). (24 ref)

20.
The effects of cognitive categorization of raters on accuracy, leniency, and halo of performance evaluations were investigated in a field setting. One hundred seventy-four subordinates evaluated the performance of their managers on three performance dimensions. Managers were categorized as congruent or incongruent based on subordinates' perceptions of the extent to which the manager's behavior met the subordinates' expectations. The results indicated that the quality of ratings assigned by subordinates was related to the cognitive categories used. As hypothesized, ratings of managers who were categorized as congruent were found to be more accurate and also to contain more leniency and halo tendency than the ratings of managers who were categorized as incongruent. Implications of these findings for performance-appraisal research are discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号