Similar Articles
20 similar articles retrieved (search time: 593 ms).
1.
In this article, the authors compare 3 generic models of the cognitive processes in a categorization task. The cue abstraction model implies abstraction in training of explicit cue-criterion relations that are mentally integrated to form a judgment, the lexicographic heuristic uses only the most valid cue, and the exemplar-based model relies on retrieval of exemplars. The results from 2 experiments showed that, in lieu of the lexicographic heuristic, most participants spontaneously integrate cues. In contrast to single-system views, exemplar memory appeared to dominate when the feedback was poor, but when the feedback was rich enough to allow the participants to discern the task structure, it was exploited for abstraction of explicit cue-criterion relations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
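To make the contrast between the three model classes concrete, the sketch below shows how each would compute a judgment for the same probe: cue abstraction integrates explicit cue weights additively, the lexicographic heuristic consults only the most valid cue, and the exemplar model takes a similarity-weighted average of stored training items. All weights, exemplars, and the similarity parameter are hypothetical illustrations, not the authors' fitted models.

```python
import numpy as np

# Hypothetical multiple-cue judgment task: a continuous criterion judged from 4 binary cues.
probe = np.array([1, 0, 1, 1])

# Cue abstraction: additive integration of explicit cue-criterion weights (assumed values).
weights, intercept = np.array([4.0, 3.0, 2.0, 1.0]), 50.0
judgment_cue_abstraction = intercept + weights @ probe

# Lexicographic heuristic: rely only on the single most valid cue.
best = np.argmax(weights)
judgment_lexicographic = intercept + weights[best] * probe[best]

# Exemplar model: similarity-weighted average of stored training exemplars,
# with similarity s raised to the number of mismatching cue values.
exemplars = np.array([[1, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]])
criteria = np.array([59.0, 53.0, 55.0])
s = 0.3                                              # assumed similarity parameter
similarity = s ** np.sum(exemplars != probe, axis=1)
judgment_exemplar = similarity @ criteria / similarity.sum()

print(judgment_cue_abstraction, judgment_lexicographic, judgment_exemplar)
```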

2.
This article describes an integration of most of the disparate likelihood judgment phenomena in behavioral decision making using a mathematical memory model. A new theory of likelihood judgments based on D. L. Hintzman's (1984, 1988) MINERVA2 memory model is described. The model, MINERVA-DM (DM = decision making), accounts for a wide range of likelihood judgment phenomena including frequency judgments, conditional likelihood judgments, conservatism, the availability and representativeness heuristics, base-rate neglect, the conjunction error, the validity effect, the simulation heuristic, and the hindsight bias. In addition, the authors extend the model to expert probability judgment and show how MINERVA-DM can account for both good and poor calibration (overconfidence) as a function of varying degrees of expertise. The authors' work is presented as a case study of the advantages of applying memory theory to study decision making. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
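The global matching computation that MINERVA-type models build on can be sketched in a few lines: a probe activates each stored trace in proportion to the cube of its similarity, and the summed activations form an echo intensity. The traces and feature coding below are hypothetical, and this is only the memory layer in the spirit of Hintzman's model, not the MINERVA-DM judgment machinery itself.

```python
import numpy as np

def echo_intensity(probe, traces):
    """Global matching sketch: cubed probe-trace similarities summed into an echo intensity."""
    probe, traces = np.asarray(probe, float), np.asarray(traces, float)
    activations = []
    for trace in traces:
        relevant = (probe != 0) | (trace != 0)           # features present in either vector
        similarity = probe @ trace / max(relevant.sum(), 1)
        activations.append(similarity ** 3)              # cubing keeps sign, favors close matches
    return float(sum(activations))

# Hypothetical memory traces with features coded -1, 0, +1
memory = [[1, -1, 0, 1], [1, 1, 0, -1], [-1, 0, 1, 1]]
print(echo_intensity([1, -1, 0, 0], memory))
```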

3.
The authors examined the cognitive processes that participants use in linear and nonlinear multiple-cue judgment tasks, hypothesizing that people are unable to use explicit cue abstraction in a nonlinear task, instead turning to exemplar memory. Experiment 1 confirmed that people are unable to use cue abstraction in nonlinear tasks but failed to confirm the hypothesized, spontaneous shift to exemplar memory. Instead, the participants appeared to be trapped in persistent and futile attempts to abstract the cue-criterion relations. Only after being instructed to rely on exemplar memory in Experiment 2 did they master the nonlinear task. The results suggest that adaptive shifts of representation need not occur spontaneously and that analytical thought may sometimes harm performance in nonlinear tasks. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Age differences in bias in conditional probability judgments were investigated based on predictions derived from the Minerva-Decision Making model (M. R. P. Dougherty, C. F. Gettys, & E. E. Ogden, 1999), a global matching model of likelihood judgment. In this study, 248 younger and older adults completed frequency judgment and conditional probability judgment tasks. Age differences in the frequency judgment task are interpreted as an age-related deficit in memory encoding. Older adults' stronger biases in the probability judgment task point to age differences in criterion setting. Age-related biases were eliminated when age groups were equated on memory encoding by means of study time manipulation. The authors conclude that older adults' stronger judgment biases are a function of memory impairment. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
This article introduces 2 new sources of bias in probability judgment, discrimination failure and inhibition failure, which are conceptualized as arising from an interaction between error-prone memory processes and a support-theory-like comparison process. Both sources of bias stem from the influence of irrelevant information on participants' probability judgments, but they postulate different mechanisms for how irrelevant information affects judgment. The authors used an adaptation of the proactive interference (PI) and release-from-PI paradigm to test the effect of irrelevant information on judgment. The results of 2 experiments support the discrimination failure account of the effect of PI on probability judgment. In addition, the authors show that 2 commonly used measures of judgment accuracy, absolute and relative accuracy, can be dissociated. The results have broad implications for theories of judgment. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Research with general knowledge items demonstrates extreme overconfidence when people estimate confidence intervals for unknown quantities, but close to zero overconfidence when the same intervals are assessed by probability judgment. In 3 experiments, the authors investigated if the overconfidence specific to confidence intervals derives from limited task experience or from short-term memory limitations. As predicted by the naive sampling model (P. Juslin, A. Winman, & P. Hansson, 2007), overconfidence with probability judgment is rapidly reduced by additional task experience, whereas overconfidence with intuitive confidence intervals is minimally affected even by extensive task experience. In contrast to the minor bias with probability judgment, the extreme overconfidence bias with intuitive confidence intervals is correlated with short-term memory capacity. The proposed interpretation is that increased task experience is not sufficient to cure the overconfidence with confidence intervals because it stems from short-term memory limitations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
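The intuition behind the naive sampling account, that intervals become too tight when the dispersion of a small sample is taken at face value, can be illustrated with a simple simulation. The normal population, sample size, and nominal 90% level below are assumptions chosen for the illustration, not the model or data reported by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, z90 = 4, 20_000, 1.645           # small sample, nominal 90% interval
hits = 0
for _ in range(trials):
    sample = rng.normal(100, 15, size=n)     # assumed population: mean 100, sd 15
    lo = sample.mean() - z90 * sample.std(ddof=1)
    hi = sample.mean() + z90 * sample.std(ddof=1)
    new_obs = rng.normal(100, 15)            # the quantity the interval is meant to cover
    hits += lo <= new_obs <= hi
print(hits / trials)                          # empirical coverage falls well below .90
```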

7.
There has been controversy over whether working memory can guide attentional selection. Some researchers have reported that the contents of working memory guide attention automatically in visual search (D. Soto, D. Heinke, G. W. Humphreys, & M. J. Blanco, 2005). On the other hand, G. F. Woodman and S. J. Luck (2007) reported that they could not find any evidence of attentional capture by working memory. In the present study, we tried to find an integrative explanation for the different sets of results. We report evidence for attentional capture by working memory, but this effect was eliminated under particular conditions: when the search was perceptually demanding or when the onset of the search was delayed long enough for cognitive control of the search to be implemented. We suggest that perceptual difficulty and the time course of cognitive control are important factors that determine when information in working memory influences attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Realistic confidence judgments are essential to everyday functioning, but few studies have addressed the issue of age differences in overconfidence. Therefore, the authors examined this issue with probability judgment and intuitive confidence intervals in a sample of 122 healthy adults (ages: 35-40, 55-60, 70-75 years). In line with predictions based on the naïve sampling model (P. Juslin, A. Winman, & P. Hansson, 2007), substantial format dependence was observed, with extreme overconfidence when confidence was expressed as an intuitive confidence interval but not when confidence was expressed as a probability judgment. Moreover, an age-related increase in overconfidence was selectively observed when confidence was expressed as intuitive confidence intervals. Structural equation modeling indicated that the age-related increases in overconfidence were mediated by a general cognitive ability factor that may reflect executive processes. Finally, the results indicated that part of the negative influence of increased age on general ability may be compensated for by an age-related increase in domain-relevant knowledge. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
The categorization of inductive reasoning into largely automatic processes (heuristic reasoning) and controlled analytical processes (rule-based reasoning) put forward by dual-process approaches to judgment under uncertainty (e.g., K. E. Stanovich & R. F. West, 2000) has been primarily a matter of assumption, with a scarcity of direct empirical findings supporting it. The present authors use the process dissociation procedure (L. L. Jacoby, 1991) to provide convergent evidence validating a dual-process perspective on judgment under uncertainty based on the independent contributions of heuristic and rule-based reasoning. Process dissociations based on experimental manipulation of variables were derived from the most relevant theoretical properties typically used to contrast the two forms of reasoning. These include processing goals (Experiment 1), cognitive resources (Experiment 2), priming (Experiment 3), and formal training (Experiment 4); the results consistently support the authors' perspective. They conclude that judgment under uncertainty is neither an automatic nor a controlled process but that it reflects both processes, with each making independent contributions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Categorization and multiple-cue judgment are similar tasks, but the influential models in the two areas are different in terms of the computations, processes, and neural substrates that they imply. In categorization, exemplar memory is often emphasized, whereas multiple-cue judgment generally is interpreted in terms of integration of cues that have been abstracted in training. In 3 experiments the authors investigated whether these conclusions derive from genuine differences in the processes or are accidental to the different research methods. The results revealed large individual differences and a shift from exemplar memory to cue abstraction when the criterion is changed from a binary to a continuous variable, especially for a probabilistic criterion. People appear to switch between qualitatively distinct processes in the 2 tasks. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Laws and guidelines regulating legal decision making are often imposed without taking the cognitive processes of the legal decision maker into account. In the case of sentencing, this raises the question of whether the sentencing decisions of prosecutors and judges are consistent with legal policy. Especially in handling low-level crimes, legal personnel suffer from high case loads and time pressure, which can make it difficult to comply with the often complex rulings of the law. To understand the cognitive processes underlying sentencing decisions, an analysis of trial records in cases of larceny, fraud, and forgery was conducted. Applying a Bayesian approach, five models of human judgment were tested against each other to predict the sentencing recommendations of the prosecution and to identify the crucial factors influencing sentencing decisions. The factors influencing sentencing were broadly consistent with the penal code. However, the prosecutors considered only a limited number of factors and neglected factors that were legally relevant and rated as highly important. Furthermore, testing the various cognitive judgment models against each other revealed that the sentencing process was apparently not consistent with the judgment policy recommended by the legal literature. Instead, the results show that prosecutors’ sentencing recommendations were best described by the mapping model, a heuristic model of quantitative estimation. According to this model, sentencing recommendations rely on a categorization of cases based on the cases’ characteristics. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
A memory processes account of the calibration of probability judgments was examined. A multiple-trace memory model, Minerva-Decision Making (MDM; M. R. P. Dougherty, C. F. Gettys, & E. E. Ogden, 1999), used to integrate the ecological (Brunswikian) and the error (Thurstonian) models of overconfidence, is described. The model predicts that overconfidence should decrease both as a function of experience and as a function of encoding quality. Both increased experience and improved encoding quality result in lower variance in the output of the model, which in turn leads to improved calibration. Three experiments confirmed these predictions. Implications of MDM's account of overconfidence are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
14.
Empirical data from 2 experiments with undergraduate Ss confirmed the format dependence predicted by the combined error model (P. Juslin, H. Olsson, & M. Björkman, see record 1997-05932-003). Format dependence refers to the simultaneous observation of over/underconfidence in judgment for the same tasks depending on the choice of response format. The ordering of the over/underconfidence effects with the half-range, full-range, and interval estimation formats was correctly predicted by the model, but the assumption of unbiased cognitive processing perturbed by random error underpredicted overconfidence with interval estimation. The estimation and removal of the effect of anchoring-and-adjustment in Experiment 2 suggested that this heuristic alone is unable to account for the overconfidence with interval estimation, whereas the joint effect of the combined error model and anchoring-and-adjustment can explain the data. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
J. D. Smith and colleagues (J. P. Minda & J. D. Smith, 2001; J. D. Smith & J. P. Minda, 1998, 2000; J. D. Smith, M. J. Murray, & J. P. Minda, 1997) presented evidence that they claimed challenged the predictions of exemplar models and supported prototype models. In the authors' view, this evidence confounded the issue of the nature of the category representation with the type of response rule (probabilistic vs. deterministic) that was used. Also, their designs did not test whether the prototype models correctly predicted generalization performance. The present work demonstrates that an exemplar model that includes a response-scaling mechanism provides a natural account of all of Smith et al.'s experimental results. Furthermore, the exemplar model predicts classification performance better than the prototype models when novel transfer stimuli are included in the experimental designs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Thomas K. Srull.     
Presents an overview of the career of Thomas K. Srull and his contributions to the field of psychology. For theoretical, empirical, and methodological contributions to knowledge about the cognitive underpinnings of social behavior and personality; for major advances in our understanding of the mental representations of individuals and groups, the cognitive processes that underlie their construction, and the use of these representations in making judgments; for ground-breaking research on the role of concept accessibility in the interpretation of social information; and for contributing to the interfaces among cognitive, social, and personality psychology. His research has provided important insights into the dynamics of social memory and the relation between memory and judgment. His areas of influence range from basic cognitive and social psychology to applied research in consumer behavior. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
18.
The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model—the mapping model—that outperformed the exemplar model in a task thought to promote exemplar-based processing. This raised questions about the assumptions of rule-based versus exemplar-based models that underlie the notion of task contingency of cognitive processes. Rule-based models, such as the mapping model, assume the abstraction of explicit task knowledge. In contrast, exemplar models should profit if storage and activation of the exemplars are facilitated. Two studies tested the importance of the two models’ assumptions. When knowledge about cues existed, the rule-based mapping model predicted quantitative estimations best. In contrast, when knowledge about the cues was difficult to gain, participants’ estimations were best described by an exemplar model. The results emphasize the task contingency of cognitive processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Many previous studies investigating long-term cognitive impairments following traumatic brain injury (TBI) have focused on extremely severely injured patients, relied on subjective reports of change, and failed to use demographically relevant control data. The aim of this study was to investigate cognitive impairments 10 years following TBI and their association with injury severity. Sixty TBI and 43 control participants were assessed on tests of attention, processing speed, memory, and executive function. The TBI group demonstrated significant cognitive impairment on measures of processing speed (Symbol Digit Modalities Test [SDMT], Smith, 1973; Digit Symbol Coding, Wechsler, 1997), memory (Rey Auditory Verbal Learning Test [RAVLT], Rey, 1958, Lezak, 1976; Doors and People Test, Baddeley, Emslie, & Nimmo-Smith, 1994), and executive function (Hayling C, Burgess & Shallice, 1997; SART errors, Robertson, Manly, Andrade, Baddeley, & Yiend, 1997). Logistic regression analyses indicated that the SDMT, RAVLT, and Hayling C and SART errors most strongly differentiated the groups in the domains of attention/processing speed, memory, and executive function, respectively. Greater injury severity was significantly correlated with poorer test performances across all domains. This study shows that cognitive impairments are present many years following TBI and are associated with injury severity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Loss aversion and reference dependence are 2 keystones of behavioral theories of choice, but little is known about their underlying cognitive processes. We propose a value construction account of loss aversion that supplements the current account, in which attribute values are encoded as gains or losses relative to a reference point. Value construction suggests that loss aversion results from biased evaluations during information search and comparison processes. We develop hypotheses that identify the influence of both accounts and examine process-tracing data for evidence. Our data suggest that loss aversion results from the initial direct encoding of losses, which leads to subsequent directional comparisons that distort attribute valuations and the final choice. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
