Similar Articles
20 similar articles found.
1.
Considered the effects of frame-of-reference (FOR) training on raters' ability to correctly classify ratee performance as well as their ability to recognize previously observed behaviors. The purpose was to examine the cognitive changes associated with FOR training to better understand why such training generally improves rating accuracy. 93 college students (mean age 22 yrs) trained using either FOR or control procedures, observed 3 managers on videotape, and rated the managers on 3 performance dimensions. Results supported the hypothesis that, compared with control training, FOR training led to better rating accuracy and better classification accuracy. Also consistent with predictions, FOR training resulted in lower decision criteria (i.e., higher bias) and lower behavioral accuracy on a recognition memory task involving impression-consistent behaviors. The implications of these results are discussed, particularly in terms of the ability of FOR-trained raters to provide accurate performance feedback to ratees. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
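The reference to "lower decision criteria (i.e., higher bias)" concerns signal-detection indices from the recognition memory task. As background only (the formulas are not given in the abstract), the standard equal-variance Gaussian indices are computed from the hit rate H and false-alarm rate F:

```latex
% Standard signal-detection indices (equal-variance Gaussian model); not taken from the study.
d' = \Phi^{-1}(H) - \Phi^{-1}(F), \qquad
c  = -\tfrac{1}{2}\left[\Phi^{-1}(H) + \Phi^{-1}(F)\right]
```

A lower (more negative) criterion c means a more liberal response strategy: raters report having seen impression-consistent behaviors more readily, inflating false alarms even when sensitivity d' is unchanged.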

2.
In this paper, we critically examine previous research on rating formats and rater training in the context of performance appraisal. Historically, the goal of this body of research has been to search for ways of maximizing the psychometric quality of performance evaluation data. Our central thesis is that there are a number of avenues for broadening this research. Accordingly, we propose a model intended to serve as a conceptual framework for future work in these 2 traditional performance appraisal research streams. For example, both rating format and rater training research may be useful for facilitating and improving the feedback and employee development process, as well as for reducing rater biases. In addition, format and training research may focus upon ways of enhancing both rater and ratee reactions to the appraisal system. A key feature of our model is the integration of national culture as a moderator of the relations between specific formats, training programs, and various outcomes. We consider both the national culture of raters and ratees, and focus specifically on comparisons between Western and East Asian cultures. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
This study extends multisource feedback research by assessing the effects of rater source and raters' cultural value orientations on rating bias (leniency and halo). Using a motivational perspective of performance appraisal, the authors posit that subordinate raters, followed by peers, will exhibit more rating bias than superiors. More important, given that multisource feedback systems were premised on low power distance and individualistic cultural assumptions, the authors expect raters' power distance and individualism-collectivism orientations to moderate the effects of rater source on rating bias. Hierarchical linear modeling on data collected from 1,447 superiors, peers, and subordinates who provided developmental feedback to 172 military officers shows that (a) subordinates exhibit the most rating leniency, followed by peers and superiors; (b) subordinates demonstrate more halo than superiors and peers, whereas superiors and peers do not differ; (c) the effects of power distance on leniency and halo are stronger for subordinates than for peers and superiors; and (d) the effects of collectivism on leniency are stronger for subordinates and peers than for superiors, whereas effects on halo are stronger for subordinates than for superiors and do not differ between subordinates and peers. The present findings highlight the role of raters' cultural values in multisource feedback ratings. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
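As an illustration of how such a cross-level moderation model might be specified (a minimal sketch, not the authors' actual analysis; the data file and the column names leniency, source, power_distance, and officer_id are hypothetical):

```python
# Hypothetical sketch: rater-level leniency predicted by rater source and the
# rater's power-distance orientation, with a random intercept for the rated officer.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("multisource_ratings.csv")  # one row per rater-officer pair (hypothetical file)

model = smf.mixedlm(
    "leniency ~ C(source) * power_distance",  # interaction tests the moderation of interest
    data=df,
    groups=df["officer_id"],                  # ratings nested within the officers being rated
)
print(model.fit().summary())
```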

4.
In several social perception studies investigators have concluded that raters' semantic conceptual similarity schemata serve to guide and constrain dimensional covariance in the rating judgment process. This effect has been hypothesized to be most likely when ratings are memory based and raters lack relevant job or ratee information. Recent research that has explored the effects of conceptual similarity schemata on performance ratings and halo error has provided some limited support for this systematic distortion hypothesis (SDH). However, these studies are limited because researchers have examined this phenomenon using group-level analyses, whereas the theory references individual-level judgment processes. The present study investigated this phenomenon at the individual level. The effects of varying levels of rater job knowledge (high, medium, and low) and familiarity with ratees (high and low) were examined for conceptual similarity–rating and rating–true-score covariation relations, for measures of halo, and for rating accuracy components. Results provided support for the SDH, but indicated a boundary condition for its operation and revealed some surprising findings for individual-level rater halo. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
This article describes how the frame of reference (FOR) approach to rater training for performance appraisal purposes (H. J. Bernardin, 1979; H. J. Bernardin & M. R. Buckley, 1981) was applied to traditional assessment center ratings and rater training. The method by which an FOR was established for the assessment center ratings is presented, including (a) definitions of dimensions of performance, (b) definitions of qualitative levels of performance within each dimension, and (c) specific behavioral examples of levels of performance on an item-by-item basis within dimensions. The resulting FOR was used to structure the training and certification of raters with the expectation of minimizing sources of rater unreliability. Implications for assessment center reliability, validity, and employee perceptions are also discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Tested C. E. Schneier's (see record 1978-11450-001) cognitive compatibility theory. In Exps I and II, 100 undergraduates rated college instructors and professor vignettes, respectively. Results show that rater cognitive complexity was unrelated to rating accuracy, halo error, acceptability of rating format, or confidence in ratings. In Exp III, 31 police sergeants rated patrol officers, and the results show that halo error and acceptability of formats were unrelated to cognitive complexity. In Exp IV, 95 undergraduates' ratings of managerial performance and instructor effectiveness showed no support for the cognitive compatibility theory. However, the data showed that raters' ability to generate dimensions was significantly related to halo error in instructors' ratings. Implications for cognitive compatibility theory and future research with the method of generating performance dimensions are discussed. (30 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Bias in observer ratings compromises generalizability of measurement, typically resulting in attenuation of observed associations between variables. This quantitative review of 79 generalizability studies including raters as a facet examines bias in observer ratings in published psychological research and identifies properties of rating systems likely to place them at risk for problems with rater bias. For the rating systems studied, an average of 37% of score variance was attributable to 2 types of rater bias: (a) raters' differential interpretations of the rating scale and (b) their differential evaluations of the same targets. Ratings of explicit attributes (e.g., frequency counts) contained negligible bias variance, whereas ratings of attributes requiring rater inference contained substantial bias variance. Rater training ameliorated but did not solve the problem of bias in inferential rating scales. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
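In a fully crossed target × rater design, the two bias components correspond to the rater main effect (differential use of the rating scale) and the rater × target interaction (differential evaluation of the same targets). A minimal sketch of the usual ANOVA-based variance-component estimates for a single rating matrix (illustrative only; the review aggregates published generalizability studies rather than raw data):

```python
# Sketch: proportion of rating variance attributable to raters in a fully crossed
# target x rater design with one observation per cell (ANOVA estimators).
import numpy as np

def rater_bias_share(x):
    """x: ratings array of shape (n_targets, n_raters)."""
    n_t, n_r = x.shape
    grand = x.mean()
    t_means, r_means = x.mean(axis=1), x.mean(axis=0)

    ms_t = n_r * np.sum((t_means - grand) ** 2) / (n_t - 1)
    ms_r = n_t * np.sum((r_means - grand) ** 2) / (n_r - 1)
    resid = x - t_means[:, None] - r_means[None, :] + grand
    ms_tr = np.sum(resid ** 2) / ((n_t - 1) * (n_r - 1))

    var_t = max((ms_t - ms_tr) / n_r, 0.0)  # target (true-score) variance
    var_r = max((ms_r - ms_tr) / n_t, 0.0)  # rater main effect: differential scale use
    var_tr = ms_tr                          # rater x target interaction (confounded with error here)
    return (var_r + var_tr) / (var_t + var_r + var_tr)

ratings = np.array([[4, 3, 5], [2, 2, 3], [5, 4, 5], [3, 1, 4]], dtype=float)
print(rater_bias_share(ratings))
```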

8.
Extends research on the cognitive mechanisms underlying frame-of-reference (FOR) rater training by examining the impact of FOR training on the recall of performance information. It was hypothesized that the shared performance schema fostered by FOR training would serve as the basis for information processing, resulting in better recall for behavioral performance information as well as more accurate ratings of individual ratees. 174 FOR-trained Ss produced more accurate performance ratings, as measured by L. Cronbach's (1955) differential accuracy and differential elevation components, than did 142 control-trained Ss. FOR-trained Ss also recalled more behaviors, representing more performance dimensions, and exhibited less evaluative clustering and a larger relationship between memory and judgment. No differences were found between control and FOR Ss on measures of recognition accuracy. Implications for the evaluative judgment process are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
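For reference, the Cronbach (1955) components named here are conventionally defined as follows (standard definitions, not text from the study), where x_{ij} and t_{ij} are the rating and true score for ratee i on dimension j and bars denote means:

```latex
% Differential elevation: accuracy in ordering ratees overall (ratee main effects).
DE^2 = \frac{1}{n}\sum_{i=1}^{n}\Big[(\bar{x}_{i\cdot}-\bar{x}_{\cdot\cdot})-(\bar{t}_{i\cdot}-\bar{t}_{\cdot\cdot})\Big]^2

% Differential accuracy: accuracy in the ratee-by-dimension pattern of performance.
DA^2 = \frac{1}{nk}\sum_{i=1}^{n}\sum_{j=1}^{k}\Big[(x_{ij}-\bar{x}_{i\cdot}-\bar{x}_{\cdot j}+\bar{x}_{\cdot\cdot})-(t_{ij}-\bar{t}_{i\cdot}-\bar{t}_{\cdot j}+\bar{t}_{\cdot\cdot})\Big]^2
```

Smaller values indicate greater accuracy; the components are often reported as square roots.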

9.
A total of 52 supervisory personnel were trained under one of three performance-appraisal training programs: rater error (response set) training, observation training, or decision-making training. Halo, leniency, range restriction, and accuracy measures were collected before and after training from the three training groups and a no-training control group. The results suggested that although the traditional rater error training, best characterized as inappropriate response set training, reduced the classic rater errors (or statistical effects), it also detrimentally affected rating accuracy. However, observation and decision-making training caused performance rating accuracy to increase after training, but did little to reduce classic rater effects. The need for a reconceptualization of rater training content and measurement focus was discussed in terms of the uncertain relation between statistical rating effects and accuracy. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
108 undergraduates were randomly assigned to 1 of 4 experimental groups to rate videotaped performances of several managers talking with a problem subordinate. The research employed a single-factor experimental design in which rater error training (RET), rater accuracy training (RAT), rating error and accuracy training (RET/RAT), and no training were compared for 2 rating errors (halo and leniency) and accuracy of performance evaluations. Differences in program effectiveness for various performance dimensions were also assessed. Results show that RAT yielded the most accurate ratings and no-training the least accurate ratings. The presence of error training (RET or RET/RAT) was associated with reduced halo, but the presence of accuracy training (RAT or RET/RAT) was associated with less leniency. Dimensions × Training interactions revealed that training was not uniformly effective across the rating dimensions. (23 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
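Operationalizations of halo, leniency, and accuracy differ across studies; one common set (offered only as illustration, not necessarily the measures used here) treats leniency as the mean deviation of ratings from true scores, halo as reduced dimensional differentiation within ratees, and accuracy as mean absolute distance from true scores:

```python
# Illustrative (study-dependent) rating indices computed against known true scores.
import numpy as np

def rating_indices(ratings, true_scores):
    """ratings, true_scores: arrays of shape (n_ratees, n_dimensions)."""
    leniency = float(np.mean(ratings - true_scores))            # positive values = lenient ratings
    halo = float(np.mean(ratings.std(axis=1)))                  # smaller within-ratee spread = more halo
    inaccuracy = float(np.mean(np.abs(ratings - true_scores)))  # distance measure; lower = more accurate
    return {"leniency": leniency, "halo_spread": halo, "inaccuracy": inaccuracy}

ratings = np.array([[5.0, 5.0, 4.0], [3.0, 4.0, 3.0]])
true_scores = np.array([[4.0, 3.0, 4.0], [3.0, 3.0, 2.0]])
print(rating_indices(ratings, true_scores))
```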

11.
To date, research has not established how rater training affects the accuracy of data yielded from Direct Behavior Rating (DBR) methods. The purpose of the current study was to examine whether providing users of DBR methods with a training session that utilized practice and performance feedback would increase rating accuracy. It was hypothesized that exposure to direct training procedures would result in greater accuracy than exposure to a brief familiarization training session. Results were consistent with initial hypotheses in that ratings conducted by trained participants were more accurate than those conducted by the untrained participants. Implications for future practice and research are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
It was hypothesized that encoding conditions would substitute for, or neutralize, the effects of frame-of-reference (FOR) training on rating accuracy by encouraging or impeding the person organization of behavior in memory. Undergraduates (N = 121) were trained with FOR or control procedures, observed videotaped manager performance in a blocked or a mixed order, rated the managers on 3 performance dimensions, and free-recalled target performance vignettes. FOR training and blocked information improved rating accuracy and led to person-based recall; however, person organization was uncorrelated with accuracy. Results are discussed in terms of R. S. Wyer and T. K. Srull's (1989) model of person memory and judgment, from which it is proposed that memory organization for behaviors may be unnecessary for rating accuracy. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
The authors developed a source-monitoring procedure to reduce the biasing effects of rater expectations on behavioral measurement. Study participants (N = 224) were given positive or negative information regarding the performance of a group and, after observing the group, were assigned to a source-monitoring or control condition. Raters in the source-monitoring condition were instructed to report only behaviors that evoked detailed memories (remember judgments) and to avoid reporting behaviors based on feelings of familiarity (know judgments). Results revealed that controlling raters' response strategy reduced (and often eliminated) the biasing effects of performance expectations. These findings advance our understanding of the performance-cue bias and offer a potentially useful technique for decreasing rater bias. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
The results of numerous social perception studies have led researchers to conclude that raters' implicit cognitive schemata regarding trait and behavior covariance may play a crucial role in the rating judgment process. W. H. Cooper (see PA, Vol 66:9176 and 9262) proposed one such cognitive schema, semantic conceptual similarity, as a key source of halo error in job performance ratings but was unable to reproduce the results of previous social perception research. The present study, with 186 undergraduates, employed baseball players as target ratees to examine the effects of job and ratee knowledge on the relations of raters' conceptual similarity schemata with rating and true score covariance. The results are consistent with the systematic distortion hypothesis presented by R. A. Shweder (see record 1976-07240-001). The association between conceptual similarity and rating covariance was significantly greater when Ss lacked sufficient job and/or ratee knowledge. Moreover, the degree of halo was also significantly greater when Ss lacked relevant job and ratee knowledge. The advantages of using objective measures of actual performance as true score estimates in the study of rater cognitive processes are discussed. (30 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
16.
The purpose of this study was to test whether a multisource performance appraisal instrument exhibited measurement invariance across different groups of raters. Multiple-groups confirmatory factor analysis as well as item response theory (IRT) techniques were used to test for invariance of the rating instrument across self, peer, supervisor, and subordinate raters. The results of the confirmatory factor analysis indicated that the rating instrument was invariant across these rater groups. The IRT analysis yielded some evidence of differential item and test functioning, but it was limited to the effects of just 3 items and was trivial in magnitude. Taken together, the results suggest that the rating instrument could be regarded as invariant across the rater groups, thus supporting the practice of directly comparing their ratings. Implications for research and practice are discussed, as well as for understanding the meaning of between-source rating discrepancies. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
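As background on the invariance tests (standard multi-group CFA notation, not taken from the article), the measurement model for rater group g and the usual nested constraints are:

```latex
% Measurement model for rater group g (self, peer, supervisor, subordinate).
x_{g} = \tau_{g} + \Lambda_{g}\,\eta_{g} + \varepsilon_{g}

% Configural invariance: the same factor pattern holds in every group.
% Metric invariance:     \Lambda_{1} = \Lambda_{2} = \dots = \Lambda_{G}
% Scalar invariance:     additionally, \tau_{1} = \tau_{2} = \dots = \tau_{G}
```

On the IRT side, differential item and test functioning is evaluated by testing whether item parameters (e.g., discrimination and location) differ across the rater groups.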

17.
130 undergraduates rated 33 paragraphs describing the performance of supermarket checkers for one of the following purposes: merit raise, development, or retention. The paragraphs were assembled using previously scaled behavioral anchors describing 5 dimensions of performance. The authors conclude that (a) purpose of the rating was a more important variable in explaining the overall variability in ratings than was rater training; (b) training raters to evaluate for some purposes led to more accurate evaluations than training for other purposes; and (c) rater strategy varied with purpose of the rating (i.e., identical dimensions were weighted, combined, and integrated differently as a function of purpose). (24 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
The purpose of this study was to test competing theories regarding the relationship between true halo (actual dimensional correlations) and halo rater error (effects of raters' general impressions on specific ratee qualities) at both the individual and group level of analysis. Consistent with the prevailing general impression model of halo rater error, results at both the individual and group levels of analysis indicated a null (vs. positive or negative) true halo-halo rater error relationship. Results support the ideas that (a) the influence of raters' general impressions is homogeneous across rating dimensions despite wide variability in levels of true halo; (b) in assigning dimensional ratings, raters rely both on recalled observations of actual ratee behaviors and on general impressions of ratees; and (c) these 2 processes occur independently of one another. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Investigated the effects of perceived purpose for rating and training type on the following dependent variables: accuracy, leniency/severity, and illusory halo. The purpose factor comprised 3 levels: a hiring purpose, a feedback purpose, and a research-only purpose. The training factor comprised 4 levels: rater error (RE) training, frame-of-reference (FOR) training, the combination of both methods, and no training. With both factors crossed, 164 undergraduates were randomly assigned to 1 of 12 conditions and viewed videotapes of lectures given by bogus graduate assistants. Heterogeneity of variance made it necessary to apply a conservative analytical strategy. Training significantly affected 2 measures of accuracy and halo, such that conditions containing an FOR component did better than RE training or no training. The conservativeness of this analytic strategy rendered the effects of the purpose factor on correlation accuracy, leniency/severity, and halo only tentative and dissipated the 1 interaction effect of the 2 factors on distance accuracy. Discussion centers on (a) comparison of the results with those of S. Zedeck and W. Cascio (see record 1983-09102-001), (b) potential reasons for the heteroscedasticity, and (c) implications for the development of student evaluations of university instructors. (32 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
