Similar Articles
20 similar articles found (search took 15 ms)
1.
This research questioned the proposition that corporate familiarity is positively associated with firm reputation. Student images of familiar and unfamiliar Fortune 500 corporations were examined in 4 experiments. The results suggested that, consistent with behavioral decision theory and attitude theory, highly familiar corporations provide information that is more compatible with the tasks of both admiring and condemning than less familiar corporations. Furthermore, the judgment context may determine whether positive or negative judgments are reported about familiar companies. The notion that people can simultaneously hold contradictory images of well-known firms may help to explain the inconsistent findings on the relation between familiarity and reputation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
The aim of this study was to estimate the reliability of the pre- to posttreatment change scores for 3 different self-image aspects, Attack, Love, and Control. To measure self-image, we used the Norwegian version of the introject surface of Benjamin's (1974) structural analysis of social behavior. The article introduces Generalizability (G-) theory, combined with the recent concept of tolerance for error, as a framework for estimating the reliability and precision of change scores in 1- and 2-facet designs. Data were obtained from the Norwegian Multi-Site Study of Process and Outcome in Psychotherapy, including 291 outpatients. The mean number of treatment sessions was 47. The results show that change scores may be highly reliable. Generalizability coefficients resting on the relative and absolute score interpretations, respectively, for both the Love and Attack change scores reached acceptable levels. The reliability of the Control change score was, however, poor. G-theory combined with the error-tolerance concept proved to be a helpful framework for assessing the dependability of change scores in a psychotherapy research setting.
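The generalizability-coefficient logic that underlies abstracts like this one can be sketched in code. This is an illustrative sketch only, using made-up variance components and the simplest one-facet person x rater (p × r) design; the study above used 1- and 2-facet designs with its own estimated components.

```python
def relative_g_coefficient(var_person, var_interaction, n_raters):
    """Relative G coefficient for a one-facet p x r design:
    E(rho^2) = var_p / (var_p + var_pr,e / n_r).
    All variance components here are hypothetical inputs."""
    return var_person / (var_person + var_interaction / n_raters)

# Illustrative numbers only (not the study's estimates):
# person variance 0.5, person-x-rater interaction/error 0.3, 3 raters.
print(round(relative_g_coefficient(0.5, 0.3, 3), 3))  # 0.833
```

Averaging over more raters shrinks the error term, which is why adding raters (or occasions, in a 2-facet design) raises the coefficient.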

3.
Commentary on an article by P. J. Silvia et al. (see record 2008-05954-001) which discusses the topic of divergent thinking. On several occasions I have suggested that a modified scientific method be used in studies of creativity (Runco, 1994a, 1999, 2006). This is a fairly contrarian suggestion because it implies a less-than-maximally objective perspective. Yet creativity will never be fully understood using the traditional scientific approach. This is in part because creativity requires originality, and the novelty that signifies originality is typically unpredictable, or at least not predictable with much precision. Perhaps more important for a modified scientific approach is the fact that creativity depends on affect, intuition, and other processes which cannot be accurately described using only objective terms. Yet at the same time, we should be as objective as possible. And although I am intrigued by generalizability theory, as described by Silvia, Winterstein, Willse, Barona, Cram, Hess, Martinez, and Richard (2008), I am concerned about their decision to use subjective scoring of divergent thinking tests. Their rationale is weak, to be blunt about it, and they have overlooked some critical research on the topic. In this commentary, I could describe the attraction of generalizability theory, but Silvia et al. do a more than adequate job of that. So instead, I will try to fill some gaps in their review of the research on divergent thinking. I also question several of their claims and methods.

4.
Divergent thinking is central to the study of individual differences in creativity, but the traditional scoring systems (assigning points for infrequent responses and summing the points) face well-known problems. After critically reviewing past scoring methods, this article describes a new approach to assessing divergent thinking and appraises its reliability and validity. In our new Top 2 scoring method, participants complete a divergent thinking task and then circle the 2 responses that they think are their most creative responses. Raters then evaluate the responses on a 5-point scale. Regarding reliability, a generalizability analysis showed that subjective ratings of unusual-uses tasks and instances tasks yield dependable scores with only 2 or 3 raters. Regarding validity, a latent-variable study (n=226) predicted divergent thinking from the Big Five factors and their higher-order traits (Plasticity and Stability). Over half of the variance in divergent thinking could be explained by dimensions of personality. The article presents instructions for measuring divergent thinking with the new method.
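The Top 2 procedure described in this abstract can be sketched as follows. This is a hypothetical illustration (the function name and data layout are mine, not the article's): a participant's score is simply the mean of the raters' 1-5 ratings over the 2 self-selected responses.

```python
def top2_score(ratings_per_response):
    """Mean rating over a participant's 2 circled responses.
    ratings_per_response: two lists of 1-5 ratings, one per response."""
    assert len(ratings_per_response) == 2, "Top 2 scoring uses exactly 2 responses"
    flat = [r for response in ratings_per_response for r in response]
    return sum(flat) / len(flat)

# Three raters score both circled responses for one participant.
print(round(top2_score([[4, 5, 4], [3, 4, 3]]), 2))  # 3.83
```

Because every participant contributes exactly 2 responses, the score is independent of how many responses were generated, which is the point of the method: it removes the fluency confound of uniqueness scoring.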

5.
Presents issue highlights, including the first debate in the journal's history. Several commentaries are presented in response to the target article "Assessing creativity with divergent thinking tasks: Exploring the reliability and validity of new subjective scoring methods," by Paul J. Silvia, Beate P. Winterstein, John T. Willse, Christopher Barona, Joshua Cram, Karl I. Hess, Jenna L. Martinez, and Crystal A. Richard (see record 2008-05954-001). These responses raised numerous insightful questions, offered new ideas, and leveled pointed critiques at the lead article.

6.
The construct validity of the Eating Disorder Inventory (EDI) was examined in 3 samples. An archival clinic sample (n=318) of women completed the EDI, a structured interview, and the Millon Clinical Multiaxial Inventory--II (MCMI-II). Confirmatory factor analyses (CFAs) indicated that neither null nor 1-factor models of the EDI fit item-level or item-parcel data. The proposed 8-factor model did not fit at the item level but did fit item-parcel data. Reliability estimates of the 8 scales ranged from .82 to .93, and low-to-moderate interscale correlations among the eating and weight-related scales provided partial support for convergent validity. EDI personality scales showed moderate interscale correlations and were associated with MCMI-II scales. A final CFA of the EDI scales supported a 2-factor model (Eating and Weight, Personality) of the 8 EDI scales. Strong associations between depression and several EDI scale scores were found in a treatment study sample (n=50). The archival clinic sample scored significantly higher on the 8 EDI scales than the nonpatient college comparison sample (n=487).

7.
Commentary on an article by P. J. Silvia et al. (see record 2008-05954-001) which discusses the topic of divergent thinking. More than 40 years have passed since the publication of the Wallach and Kogan (1965) volume, and yet it continues to draw both praise and criticism from researchers in the creativity field. The Silvia et al. (2008) article tilts more strongly to criticism than to praise, and accordingly, one of the editors of this journal (JCK) kindly offered me an opportunity to respond. I do so with some hesitancy, as I am no longer an active divergent-thinking (DT) researcher. This gap in active involvement as a DT researcher was not a severe handicap for me in appraising the Silvia et al. (2008) article. Issues of reliability and validity of DT measures--the central concern of that article--have preoccupied investigators ever since Guilford's (1950) original formulation, a preoccupation that I shared. At the outset, I should state that I consider the Silvia et al. (2008) treatment of those fundamental issues to be methodologically sound. My intent in the present commentary, rather, is to demonstrate that the Zeitgeist at the time of the Wallach and Kogan (1965) study was quite different from that prevailing today.

8.
Commentary on an article by P. J. Silvia et al. (see record 2008-05954-001) which discusses the topic of divergent thinking. Silvia et al.'s (2008) primary motivations for exploring and proposing their subjective scoring method are their perceived deficiencies of current divergent thinking tests. First, scores on divergent thinking tests frequently correlate highly with general intelligence. Second, the scoring of divergent thinking tests has changed little since the 1960s. Third, the necessity of instructing people to be creative prior to taking divergent thinking tests is integral to obtaining useful responses and needs to be reaffirmed. Fourth, and finally, the problems posed by uniqueness scoring--confounding with fluency, ambiguity of rarity, and the seeming "penalty" imposed on large samples--need to be addressed.

9.
2 studies were done on the congruence of reputation and overt behavior. Ss were 255 5th- and 6th-grade boys. The Peer Nomination Inventory (Wiggins & Winder, 1961) was used to assess the reputation of each boy for Aggression, Dependency, Withdrawal, Depression, and Likeability. Ss were assigned to high-, medium-, and low-aggression reputation groups, and to analogous dependency reputation groups. Then, Ss were observed, respectively, in a Situational Test of Aggression and a Situational Test of Dependency. Findings support the conclusion that reputation is predictive of overt interpersonal behavior. A tentative conclusion is that overt dependency and overt aggression are less closely related than are those aspects of reputation. More specifically, the results are a partial validation of the Aggression and Dependency scales of the Peer Nomination Inventory.

10.
Commentary on an article by P. J. Silvia et al. (see record 2008-05954-001) which discusses the topic of divergent thinking. It is certainly true, as Silvia et al. (2008) write, that "after half a century of research, the evidence for global creative ability ought to be better" (p. 68). The authors believe--incorrectly, I think--that the reason that divergent thinking tests have not done a better job can be found in the various scoring systems that have been used when assessing divergent thinking ability. I have presented evidence elsewhere that creativity is not a general ability or set of traits or dispositions that can be applied across domains (Baer, 1991, 1993, 1994a, 1994b, 1998). In those studies, I used Amabile's (1982, 1996) Consensual Assessment Technique (which is the basis for the subjective scoring technique proposed by Silvia et al. [2008]) to judge the creativity of a wide range of artifacts. What I found was that there is little correlation among the creativity ratings received by subjects across domains, and what little there is tends to disappear if an IQ test is also given and variance attributable to intelligence is first removed. The evidence presented thus far for Silvia et al.'s (2008) proposed method for scoring responses to divergent thinking tasks has far too many flaws to allow any confidence in its use.

11.
Replies to comments by M. D. Mumford et al. (see record 2008-05954-002), J. Baer (see record 2008-05954-003), M. A. Runco (see record 2008-05954-004), K. H. Kim (see record 2008-05954-005), N. Kogan (see record 2008-05954-006), and S. Lee (see record 2008-05954-007) on the current authors' original article on divergent thinking (see record 2008-05954-001). In this reply, the authors examine the madness to their method in light of the comments. Overall, the authors agree broadly with the comments; many of the issues will be settled only by future research. The authors disagree, though, that past research has proven past scoring methods--including the Torrance methods--to be satisfactory or satisfying. The authors conclude by offering their own criticisms of their method, of divergent thinking, and of the concept of domain-general creative abilities.

12.
Reports an error in the article entitled "Accuracy and Generalizability of an Automated MMPI Interpretation System," by David Lachar (Journal of Consulting & Clinical Psychology, Vol. 42(2), Apr 1974, 267-273). Three references to tables on p. 270 appeared incorrectly. The sentences should read as follows: In column 1, the last sentence should read: A distribution of the frequency of the 51 code paragraphs is presented in Table 3. The second sentence under the paragraph heading Narrative Accuracy should read: The distribution of these 1,410 ratings appears in Table 4. The first sentence in Paragraph 2 should read: Table 1 presents the levels of the six variables included in the linear regression analysis of variance of overall narrative ratings. (The following abstract originally appeared in record 1974-27670-001.) Evaluated narrative paragraph types and total reports of a new MMPI clinical interpretation simulation program. Complete documentation of this system and notation of accuracy and frequency of individual statements are provided elsewhere. MMPI interpretations of 1,410 patients who received psychiatric evaluations were judged by the clinicians who saw these patients. 107 paragraphs appeared 7,555 times and were rated inaccurate less than 10% of the time. 91% of these reports received overall favorable ratings. A linear regression analysis of variance of overall narrative ratings with 2 narrative and 4 patient variables suggested that this system has considerable generalizability. Narrative Type * Patient Source and Patient Age * Patient Source interactions are discussed.

13.
Commentary on an article by P. J. Silvia et al. (see record 2008-05954-001) which discusses the topic of divergent thinking. In Study 1, Silvia et al. (2008) criticized the uniqueness scoring of Wallach and Kogan (1965). Uniqueness scoring has the virtue that a single rater may suffice, and it is characterized by the assignment of points to uncommon responses in the pool of a sample's responses. The first criticism of uniqueness scoring is that uniqueness scores increase as a subject produces more responses, resulting in a confounding of uniqueness and fluency. The second criticism relates to the ambiguity of the statistical rarity pursued by uniqueness scoring, in that uniqueness does not guarantee creativity. When a mundane unique response is misperceived as creative, reliability is threatened. Some bizarre, grotesque, or inappropriate responses in the pool of responses may be assigned a point, threatening validity. The third criticism raised by the authors is that the uniqueness scoring system penalizes large samples, in that it is less probable for a response in a larger sample of people to appear unique. However, the subjective scoring system has other deficits and is never free from the first two criticisms. The third criticism is, however, unfounded; rather, the uniqueness scoring system is in a better position to capture the construct of creativity through better accessibility to large samples. The authors' (Silvia et al., 2008) three criticisms will be discussed one by one.
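The fluency confound named in the first criticism is easy to see in a minimal sketch of uniqueness scoring (hypothetical data and function names, not from the commentary): a more fluent respondent gets more chances to produce a statistically unique response.

```python
from collections import Counter

def uniqueness_scores(responses_by_person):
    """One point per response given by no one else in the sample."""
    counts = Counter(r for resp in responses_by_person.values() for r in resp)
    return {person: sum(1 for r in resp if counts[r] == 1)
            for person, resp in responses_by_person.items()}

# Uses for a brick: B is more fluent, so B has more shots at uniqueness.
sample = {
    "A": ["doorstop", "paperweight", "bookend"],
    "B": ["doorstop", "paperweight", "bookend",
          "grind into pigment", "heat and use as bed warmer"],
}
print(uniqueness_scores(sample))  # {'A': 0, 'B': 2}
```

A and B agree on every shared response; B's higher score comes entirely from producing more responses, which is exactly the uniqueness-fluency confound at issue.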

14.
Laboratory studies of psychopathy have yielded an impressive array of etiologically relevant findings. To date, however, attempts to demonstrate the generalizability of these findings to African American psychopathic offenders have been largely unsuccessful. The fear deficit has long been regarded as the hallmark of psychopathy, yet the generalizability of this association to African American offenders has not been systematically evaluated. In this study, we used an instructed fear paradigm and fear-potentiated startle to assess this deficit and the factors that moderate its expression in African American offenders. Furthermore, we conceptualized psychopathy using both a unitary and a two-factor model, and we assessed the constructs with both interview-based and self-report measures. Regardless of assessment strategy, results provided no evidence that psychopathy relates to fear deficits in African American offenders. Further research is needed to clarify whether the emotion deficits associated with psychopathy in European American offenders are applicable to African American offenders.

15.
The Kaufman Assessment Battery for Children-Second Edition (KABC-II) is a departure from the original KABC in that it allows for interpretation via two theoretical models of intelligence. This study had two purposes: to determine whether the KABC-II measures the same constructs across ages and to investigate whether those constructs are consistent with Cattell-Horn-Carroll (CHC) theory. Multiple-sample analyses were used to test for equality of the variance-covariance matrices across the 3- to 18-year-old sample. Higher-order confirmatory factor analyses were used to compare the KABC-II model with rival CHC models for children ages 6 to 18. Results show that the KABC-II measures the same constructs across all ages. The KABC-II factor structure for school-age children is aligned closely with five broad abilities from CHC theory, although some inconsistencies were found. Models without time bonuses fit better than those with time bonuses. The results provide support for the construct validity of the KABC-II. Additional research is needed to more completely understand the measurement of fluid reasoning and the role of time bonuses on some tasks.

16.
Commentary on an article by P. J. Silvia et al. (see record 2008-05954-001) which discusses the topic of divergent thinking. Although their findings appear reasonable, we have two key concerns with regard to these conclusions: (1) the strength of the available construct validity evidence, and (2) the substantive logic underlying the study. The next generation of measures should apply a domain, strategy, process approach rather than the domain-free, output-based approach recommended by Silvia et al. (2008).

17.
As a basis for theories of psychopathology, clinical psychology and related disciplines need sound taxonomies that are generalizable across diverse populations. To test the generalizability of a statistically derived 8-syndrome taxonomic model for youth psychopathology, confirmatory factor analyses (CFAs) were performed on the Youth Self-Report (T. M. Achenbach & L. A. Rescorla, 2001) completed by 30,243 youths 11-18 years old from 23 societies. The 8-syndrome taxonomic model met criteria for good fit to the data from each society. This was consistent with findings for the parent-completed Child Behavior Checklist (Achenbach & Rescorla, 2001) and the teacher-completed Teacher's Report Form (Achenbach & Rescorla, 2001) from many societies. Separate CFAs by gender and age group supported the 8-syndrome model for boys and girls and for younger and older youths within individual societies. The findings provide initial support for the taxonomic generalizability of the 8-syndrome model across very diverse societies, both genders, and 2 age groups.

18.
19.
"The renaming of the process of building a theory of behavior by the new term 'construct validity' contributes nothing to the understanding of the process nor to the usefulness of the concepts. The introduction into discussion of psychological theorizing of the aspects of construct validity discussed… creates, at best, unnecessary confusion and at worst, a nonempirical, nonscientific approach to the study of behavior." Terminology of logical behaviorism and techniques of an "operational methodology" are preferred. "It is… recommended that the formulation of construct validity, as presented in the several papers noted in this critique, be eliminated from further consideration as a way of speaking about psychological concepts, laws, and theories." (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

20.
It is argued that in the next edition of Technical Recommendations for Psychological Tests and Diagnostic Techniques "there should be a considerable strengthening of a set of precautionary requirements more easily classified under construct validity than under concurrent or predictive validity as presently described."
