20 similar documents found; search time: 0 ms
1.
A simplified percentile profile chart, designed for presenting test results to supervisors, is described and illustrated. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
2.
Are the 5 forms of the Wonderlic Personnel Test really equivalent? "Sixteen groups consisting of 590 male applicants for apprenticeship programs in a large manufacturing company were tested using all 5 forms of the Wonderlic Personnel Test (Forms A, B, D, E and F… . it is recommended that Form B of the Personnel Test not be regarded as directly equivalent to any of the other four forms of the test and that Form D not be regarded as directly equivalent to Form F in industrial testing situations similar to the one in the present study." (PsycINFO Database Record (c) 2010 APA, all rights reserved)
3.
Sackett Paul R.; Borneman Matthew J.; Connelly Brian S. 《American Psychologist》2009,64(4):285
We are pleased that our article (see record 2008-05553-001) prompted this series of four commentaries and that we have this opportunity to respond. We address each in turn. Duckworth (see record 2009-06923-012) and Kaufman and Agars (see record 2009-06923-013) discussed, respectively, two broad issues concerning the validity of selection systems, namely, the expansion of the predictor domain to include noncognitive predictors of performance and the expansion of the criterion domain to include additional criteria (e.g., creativity). We agree with these arguments, noting that they expand on points made in our original article. Wicherts and Millsap (see record 2009-06923-014) rightly noted the distinction between measurement bias and predictive bias and the fact that a finding of no predictive bias does not rule out the possibility that measurement bias still exists. They took issue with a statement we cited from Cullen, Hardison, and Sackett (2004) that if motivational mechanisms, such as stereotype threat, result in minority group members obtaining lower observed scores than true scores (i.e., a form of measurement bias), then the performance of minority group members should be underpredicted. Our characterization of Cullen et al.'s (2004) statement was too cryptic; what was intended was a statement to the effect that if the regression lines for majority and minority groups are identical at the level of true predictor scores, then a biasing factor resulting in lower observed scores than true scores for minority group members would shift the minority group regression line to result in underprediction for that group. We do agree with Helms's (see record 2009-06923-015) call for studying the reasons why racial-group differences are found and encourage this line of research; however, we view the study of racial-group differences and the study of determinants of those differences as complementary.
We thank the authors for contributing these commentaries and for stimulating this discussion. Duckworth (2009) and Kaufman and Agars (2009) discussed important issues regarding expanding the predictor and criterion domains. Wicherts and Millsap (2009) correctly noted distinctions between predictive and measurement bias and used stereotype threat as a mechanism to discuss these issues. Helms (2009) raised several issues regarding the validity and fairness of standardized tests. In all cases, we welcomed the opportunity to discuss these topics and provide more detail on issues relating to high-stakes standardized testing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
4.
This study examined how specific features of adaptive tests are related to test takers' reactions. Participants took a computer-adaptive test in which 2 features, difficulty of the initial item and difficulty of subsequent items, were manipulated, then responded to questionnaires assessing their reactions to the test. The data show that the relationship between a test's objective difficulty, which was determined by the 2 manipulated test characteristics, and reactions was fully mediated by perceived performance. Additional analyses evaluated the impact of feedback on reactions to the adaptive test. In general, feedback that was consistent with perceptions of performance was positively related to reactions. The results suggest that minor changes to the design of an adaptive test may potentially enhance examinees' reactions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
5.
Psychological test publishers have longstanding policies prohibiting the sale of psychological tests to unqualified persons. Psychologists are also ethically bound to maintain test security. The emergence of Internet auction sites, however, poses a heretofore unrecognized threat to test security. This 3-month survey of auction listings on eBay found that both personality and intelligence tests, including test manuals, are available, and that sales to unqualified persons are not always restricted. These findings highlight a need to strengthen the ethics code to specify standards for the disposal of unwanted test material. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
6.
Barsevick Andrea M.; Montgomery Susan V.; Ruth Karen; Ross Eric A.; Egleston Brian L.; Bingler Ruth; Malick John; Miller Suzanne M.; Cescon Terrence P.; Daly Mary B. 《Journal of Family Psychology》2008,22(2):303
Guided by the theory of planned behavior, this analysis explores the communication skills of women who had genetic testing for BRCA1 and BRCA2. The key outcome was intention to tell test results to adult first-degree relatives. The theory predicts that global and specific attitudes, global and specific perceived social norms, and perceived control will influence the communication of genetic test results. A logistic regression model revealed that global attitude (p …
7.
Aguinis Herman; Culpepper Steven A.; Pierce Charles A. 《Journal of Applied Psychology》2010,95(4):648
We developed a new analytic proof and conducted Monte Carlo simulations to assess the effects of methodological and statistical artifacts on the relative accuracy of intercept- and slope-based test bias assessment. The main simulation design included 3,185,000 unique combinations of a wide range of values for true intercept- and slope-based test bias, total sample size, proportion of minority group sample size to total sample size, predictor (i.e., preemployment test scores) and criterion (i.e., job performance) reliability, predictor range restriction, correlation between predictor scores and the dummy-coded grouping variable (e.g., ethnicity), and mean difference between predictor scores across groups. Results based on 15 billion 925 million individual samples of scores and more than 8 trillion 662 million individual scores raise questions about the established conclusion that test bias in preemployment testing is nonexistent and, if it exists, it only occurs regarding intercept-based differences that favor minority group members. Because of the prominence of test fairness in the popular media, legislation, and litigation, our results point to the need to revive test bias research in preemployment testing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
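The moderated-regression logic behind intercept- and slope-based bias assessment can be illustrated with a toy Monte Carlo in Python (NumPy). This is a minimal sketch, not the study's simulation design: the sample sizes, effect sizes, error variances, and the |t| > 1.96 cutoff are all illustrative assumptions.

```python
import numpy as np

def simulate_bias_detection(n=400, prop_minority=0.3, d_intercept=0.5,
                            d_slope=0.0, n_reps=200, seed=0):
    """Toy Monte Carlo: fit y = b0 + b1*x + b2*g + b3*(x*g) by OLS and
    count how often the group dummy (intercept-based bias) and the
    interaction term (slope-based bias) are flagged at |t| > 1.96."""
    rng = np.random.default_rng(seed)
    hits_intercept = hits_slope = 0
    for _ in range(n_reps):
        g = (rng.random(n) < prop_minority).astype(float)   # 1 = minority group
        x = rng.normal(0.0, 1.0, n)                         # test score (predictor)
        # Generating model: minority regression line shifted by d_intercept
        # and tilted by d_slope relative to the majority line.
        y = 0.5 * x + d_intercept * g + d_slope * x * g + rng.normal(0.0, 1.0, n)
        X = np.column_stack([np.ones(n), x, g, x * g])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
        t = beta / se
        hits_intercept += abs(t[2]) > 1.96
        hits_slope += abs(t[3]) > 1.96
    # Detection rates: power for a real effect, false-positive rate otherwise.
    return hits_intercept / n_reps, hits_slope / n_reps
```

With a true intercept difference and no slope difference, the dummy term is flagged far more often than the interaction term, which mirrors the kind of detection-rate question the simulations above address at vastly larger scale.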
8.
Sackett, Borneman, and Connelly’s (see record 2008-05553-001) article and recent meta-analyses (e.g., Kuncel & Hezlett, 2007) should lay to rest any doubt over whether high-stakes standardized tests predict important academic and professional outcomes—they do. The challenge now is to identify noncognitive individual differences that determine the same outcomes. Noncognitive is, of course, a misnomer. Every psychological process is cognitive in the sense of relying on the processing of information of some kind. Why do so many psychologists, including myself, resort to the term noncognitive despite its obvious inappropriateness? (PsycINFO Database Record (c) 2010 APA, all rights reserved)
9.
Notes that the majority of mental alertness tests used in the employment situation are speed tests in which the time factor plays an important role. Those of us working in business and industry are sometimes asked whether such speed tests unduly penalize the "slow and accurate individual" who might be a very satisfactory worker. Some information relative to this question is presented. It is concluded that individuals performing relatively poorly on a mental alertness test under timed conditions will not appreciably change their standing in the group when permitted to complete the test with no time limit. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
10.
Sackett, Borneman, and Connelly (see record 2008-05553-001) argued that several common criticisms of cognitively laden tests are not well supported by the literature. The authors’ systematic exploration of research surrounding seven specific criticisms is laudable, and we do not find fault with their conclusions as presented. In evaluating the seven concerns, however, the authors largely neglected the criteria that such tests are intended to predict. As a result, readers may come away with the erroneous conclusion that all is well in the mass testing world of cognitive ability. We wish to expand on Sackett et al.’s review by raising concerns about traditional approaches to defining academic and organizational success. In doing so, we argue for the importance of creativity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
11.
In defending tests of cognitive abilities, knowledge, or skills (CAKS) from the skepticism of their “family members, friends, and neighbors” and aiding psychologists forced to defend tests from “myth and hearsay” in their own skeptical social networks (p. 215), Sackett, Borneman, and Connelly (see record 2008-05553-001) focused on evaluating validity coefficients, racial or gender group differences, and fair assessment research. In doing so, they concluded that CAKS tests generally yield valid and fair test scores for their intended purposes, but because the authors did not adequately attend to (a) research design issues (e.g., inclusion of independent or predictor variables [IPV] and dependent variables or criteria), (b) statistical assumptions underlying interpretation of their analyses (e.g., bivariate normality of distributions of test scores and criteria), and (c) conceptual concerns (e.g., whether racial categories should be used as explanatory constructs), alternative conclusions about CAKS test score validity and fairness are plausible. Although all of the foregoing areas of concern are germane to each of the assertions addressed by Sackett et al. (2008), the focus here is on Assertions 6 through 8 (p. 216; hereinafter called the fairness assertions [FA]) because making accurate inferences about fairness requires measurement experts to engage in a paradigmatic shift where sociodemographic groups (e.g., Blacks, Latinos/Latinas) are concerned, whereas, for the most part, addressing the other assertions merely requires a reminder of which standard psychometric principles have not been followed (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 1999). (PsycINFO Database Record (c) 2010 APA, all rights reserved)
12.
We consider the situation in which a learner must induce the rule that explains an observed set of data but the hypothesis space of possible rules is not explicitly enumerated or identified. The first part of the article demonstrates that as long as hypotheses are sparse (i.e., index less than half of the possible entities in the domain) then a positive test strategy is near optimal. The second part of this article then demonstrates that a preference for sparse hypotheses (a sparsity bias) emerges as a natural consequence of the family resemblance principle; that is, it arises from the requirement that good rules index entities that are more similar to one another than they are to entities that do not satisfy the rule. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
13.
Sackett, Borneman, and Connelly (see record 2008-05553-001) recently discussed several criticisms that are often raised against the use of cognitive tests in selection. One criticism concerns the issue of measurement bias in cognitive ability tests with respect to specific groups in society. Sackett et al. (2008) stated that “absent additional information, one cannot determine whether mean differences [in test scores] reflect true differences in the developed ability being measured or bias in the measurement of that ability” (p. 222). Their discussion of measurement bias appears to suggest that measurement bias in tests can be accurately detected through the study of differential prediction of criteria across groups. In this comment, we argue that this assertion is incorrect. In fact, it has been known for more than a decade that tests of differential regression are not generally diagnostic of measurement bias (Millsap, 1997, 1998, 2008). (PsycINFO Database Record (c) 2010 APA, all rights reserved)
14.
This study fills a key gap in research on response instructions in situational judgment tests (SJTs). The authors examined whether the assumptions behind the differential effects of knowledge and behavioral tendency SJT response instructions hold in a large-scale high-stakes selection context (i.e., admission to medical college). Candidates (N = 2,184) were randomly assigned to a knowledge or behavioral tendency response instruction SJT, while SJT content was kept constant. Contrary to prior research in low-stakes settings, no meaningfully important differences were found between mean scores for the response instruction sets. Consistent with prior research, the SJT with knowledge instructions correlated more highly with cognitive ability than did the SJT with behavioral tendency instructions. Finally, no difference was found between the criterion-related validity of the SJTs under the two response instruction sets. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
15.
Monroe H. Freeman is Chairman of the National Capital Area Civil Liberties Union, consultant to the Educational Testing Service, and a member of the Test Development and Research Committee of the Law School Admission Test. His plea to psychologists is that each psychologist bring to bear their professional training and their special insights to evaluate, or reevaluate, the proper role and the unique responsibilities of the psychologist with respect to people who are not voluntarily seeking psychological counseling, but who are compelled to be tested as a condition to employment. He discusses three elements of the problem: the sociopolitical aspects, the impact on individuals, and the question of professional responsibility. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
16.
Stitzer Maxine L.; Petry Nancy; Peirce Jessica; Kirby Kimberly; Killeen Therese; Roll John; Hamilton John; Stabile Patricia Q.; Sterling Robert; Brown Chanda; Kolodner Ken; Li Rui 《Journal of Consulting and Clinical Psychology》2007,75(5):805
Intake urinalysis test result (drug positive vs. negative) has been previously identified as a strong predictor of drug abuse treatment outcome, but there is little information about how this prognostic factor may interact with the type of treatment delivered. The authors used data from a multisite study of abstinence incentives for stimulant abusers enrolled in outpatient counseling treatment (N. M. Petry, J. M. Peirce, et al., 2005) to examine this question. The first study urine was used to stratify participants into stimulant negative (n = 306) versus positive (n = 108) subgroups. Abstinence incentives significantly improved retention in those testing negative but not in those testing positive. Findings suggest that stimulant abusers presenting to treatment with a stimulant-negative urine benefit from abstinence incentives, but alternative treatment approaches are needed for those who test stimulant positive at intake. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
17.
Sackett Paul R.; Borneman Matthew J.; Connelly Brian S. 《American Psychologist》2008,63(4):215
The authors review criticisms commonly leveled against cognitively loaded tests used for employment and higher education admissions decisions, with a focus on large-scale databases and meta-analytic evidence. They conclude that (a) tests of developed abilities are generally valid for their intended uses in predicting a wide variety of aspects of short-term and long-term academic and job performance, (b) validity is not an artifact of socioeconomic status, (c) coaching is not a major determinant of test performance, (d) tests do not generally exhibit bias by underpredicting the performance of minority group members, and (e) test-taking motivational mechanisms are not major determinants of test performance in these high-stakes settings. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
18.
Reviews the book, Special education and rehabilitation testing: Practical applications and test reviews edited by Brian Bolton (no publication year provided). Special Education and Rehabilitation Testing is a reference manual designed to facilitate the identification and selection of appropriate tests for use in the assessment and diagnosis of persons with handicaps. It should be emphasized that, despite the title's implication, this is not a how-to manual for conducting diagnostic evaluations of persons with handicaps. The present volume is part of the Applied Testing Series under the general editorship of Daniel Keyser and Richard Sweetland. Each volume in the Applied Testing Series is edited by a specialist in the topical area and draws upon and focuses reviews contained in the Test Critiques reference volumes (Keyser & Sweetland, 1984-1988) and Tests: A Comprehensive Reference for Assessments in Psychology, Education, and Business (Sweetland & Keyser, 1986). As editor, Brian Bolton selected 95 psychoeducational tests for inclusion in this volume. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
19.
Selected literature related to statistical testing is reviewed to compare the theoretical models underlying parametric and nonparametric inference. Specifically, we show that these models evaluate different hypotheses, are based on different concepts of probability and resultant null distributions, and support different substantive conclusions. We suggest that cognitive scientists should be aware of both models, thus providing them with a better appreciation of the implications and consequences of their choices among potential methods of analysis. This is especially true when it is recognized that most cognitive science research employs design features that do not justify parametric procedures, but that do support nonparametric methods of analysis, particularly those based on the method of permutation/randomization. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
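The permutation/randomization method this review points to can be sketched in a few lines of Python (NumPy). This is a minimal illustration, not taken from the article: the mean-difference statistic and the two-sided counting rule are one common choice among many.

```python
import numpy as np

def permutation_test(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test on the difference in means.
    The null distribution is built by reshuffling group labels,
    so no normality or other parametric assumption is needed."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    n_x = len(x)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled scores
        diff = pooled[:n_x].mean() - pooled[n_x:].mean()
        if abs(diff) >= abs(observed):  # two-sided p-value
            count += 1
    return count / n_perm
```

The returned p-value is simply the proportion of label shufflings that produce a mean difference at least as extreme as the observed one, which is the "different concept of probability and resultant null distribution" contrasted with parametric inference above.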
20.
The Gordon Personal Profile was administered to junior and senior high school students for vocational guidance purposes. Three months later it was readministered as an employment test to students applying for jobs. Those not seeking jobs took the test again as a guidance test… . "Individuals did not change their profile patterns substantially from a guidance situation to an employment situation, and mean increases for the group were found to be moderate." (PsycINFO Database Record (c) 2010 APA, all rights reserved)