Assessing reliability: Critical corrections for a critical examination of the Rorschach Comprehensive System.
Authors: Gregory J. Meyer
Abstract: J. M. Wood et al. (see records 84-17678 and 84-17679) argued that the Rorschach Comprehensive System (CS) lacked many essential pieces of reliability data and that the available evidence indicated scoring reliability may be little better than chance. Contrary to their assertions, the author explains why rater agreement should focus on responses rather than summary scores, how field reliability moves away from testing CS scoring principles, and why no psychometric distinction exists between a percentage-correct and a percentage-agreement index. Also, after reviewing problematic qualities of kappa, a meta-analysis of published data is presented indicating that the CS has excellent chance-corrected interrater reliability (estimated κ: M = .86, range = .72–.96). Finally, the author notes that Wood et al. ignored at least 17 CS studies of test-retest reliability that contain many of the important data they said were missing. The author concludes that Wood et al.'s erroneous assertions about the more elementary topic of reliability make suspect their assertions about the more complex topic of validity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)