Similar Articles
A total of 20 similar articles were retrieved.
1.
Tactile memory systems are involved in the storage and retrieval of information about stimuli that impinge on the body surface and objects that people explore haptically. Here, the authors review the behavioral, neuropsychological, neurophysiological, and neuroimaging research on tactile memory. This body of research reveals that tactile memory can be subdivided into a number of functionally distinct neurocognitive subsystems, just as is the case with auditory and visual memory. Some of these subsystems are peripheral and short lasting and others are more central and long lasting. The authors highlight evidence showing that the representation of tactile information interacts with information about other sensory attributes (e.g., visual, auditory, and kinaesthetic) of objects/events that people perceive. This fact suggests that at least part of the neural network involved in the memory for touch might be shared among different sensory modalities. In particular, multisensory/amodal information-processing networks seem to play a leading role in the storage of tactile information in the brain. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Are resources in visual working memory allocated in a continuous or a discrete fashion? On one hand, flexible resource models suggest that capacity is determined by a central resource pool that can be flexibly divided such that items of greater complexity receive a larger share of resources. On the other hand, if capacity in working memory is defined in terms of discrete storage “slots,” then observers may be able to determine which items are assigned to a slot but not how resources are divided between stored items. To test these predictions, the authors manipulated the relative complexity of the items to be stored while holding the number of items constant. Although mnemonic resolution declined when set size increased (Experiment 1), resolution for a given item was unaffected by large variations in the complexity of the other items to be stored when set size was held constant (Experiments 2–4). Thus, resources in visual working memory are distributed in a discrete slot-based fashion, even when interitem variations in complexity motivate an asymmetrical division of resources across items. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
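To make the contrast between the two accounts concrete, here is a minimal sketch, not taken from the article and using purely hypothetical numbers, of their diverging predictions for the probed item's mnemonic resolution. Under a flexible-resource account the probed item's resolution should fall as the other stored items become more complex (they claim a larger share of the pool), whereas under a discrete-slot account it should depend only on whether the item occupies a slot, which is the pattern the authors report in Experiments 2–4.

# Toy comparison of the two accounts; all parameter values are hypothetical.
def resource_model_resolution(probed_complexity, other_complexities, pool=1.0):
    # Flexible resources: the pool is divided in proportion to item complexity;
    # resolution is assumed proportional to the resources the probed item receives.
    total = probed_complexity + sum(other_complexities)
    return pool * probed_complexity / total

def slot_model_resolution(probed_complexity, other_complexities, n_slots=3):
    # Discrete slots: resolution depends only on whether the item gets a slot,
    # not on how complex the other stored items are.
    n_items = 1 + len(other_complexities)
    return 1.0 / probed_complexity if n_items <= n_slots else 0.0

# The slot-model value is identical across rows; the resource-model value is not.
for label, others in [("simple companions", [1.0, 1.0]), ("complex companions", [4.0, 4.0])]:
    print(label,
          "| resource model:", round(resource_model_resolution(1.0, others), 3),
          "| slot model:", round(slot_model_resolution(1.0, others), 3))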

3.
Given a changing visual environment, and the limited capacity of visual working memory (VWM), the contents of VWM must be in constant flux. Using a change detection task, the authors show that VWM is subject to obligatory updating in the face of new information. Change detection performance is enhanced when the item that may change is retrospectively cued 1 s after memory encoding and 0.5 s before testing. The retro-cue benefit cannot be explained by memory decay or by a reduction in interference from other items held in VWM. Rather, orienting attention to a single memory item makes VWM more resistant to interference from the test probe. The authors conclude that the content of VWM is volatile unless it receives focused attention, and that the standard change detection task underestimates VWM capacity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
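As background for the capacity claim in the final sentence: in this literature, change detection accuracy is commonly converted into a capacity estimate with Cowan's K formula (a standard estimator; the abstract does not state which estimator these authors used), so any manipulation that raises hits or lowers false alarms, such as the retro-cue, raises the estimated number of items held in VWM:

K = N \times (H - F)

where N is the set size, H the hit rate, and F the false-alarm rate. For example, with N = 8, H = .75, and F = .25, the estimate is K = 8 \times (.75 - .25) = 4 items.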

4.
The authors investigated the effects of changes in horizontal viewing angle on visual and audiovisual speech recognition in 4 experiments, using a talker's face viewed full face, at a three-quarter angle, and in profile. When only experimental items were shown (Experiments 1 and 2), identification of unimodal visual speech and visual speech influences on congruent and incongruent auditory speech were unaffected by viewing angle changes. However, when experimental items were intermingled with distractor items (Experiments 3 and 4), identification of unimodal visual speech decreased with profile views, whereas visual speech influences on congruent and incongruent auditory speech remained unaffected by viewing angle changes. These findings indicate that audiovisual speech recognition withstands substantial changes in horizontal viewing angle, but explicit identification of visual speech is less robust. Implications of this distinction for understanding the processes underlying visual and audiovisual speech recognition are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Two experiments were conducted to examine whether abrupt onsets are capable of reflexively capturing attention when they occur outside the current focus of spatial attention, as would be expected if exogenous orienting operates in a truly automatic fashion. The authors established a highly focused attentional state by means of the central presentation of a stream of visual or auditory characters, which participants sometimes had to monitor. No intramodal reflexive cuing effects were observed in either audition or vision when participants performed either an exogenous visual or auditory orthogonal cuing task together with the central focused attention task. These results suggest that reflexive unimodal orienting is not truly automatic. The fact that cuing effects were eliminated under both unimodal and cross-modal conditions is consistent with the view that auditory and visual reflexive spatial orienting are controlled by a common underlying neural substrate. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Examinations of interference between verbal and visual materials in working memory have produced mixed results. If there is a central form of storage (e.g., the focus of attention; N. Cowan, 2001), then cross-domain interference should be obtained. The authors examined this question with a visual-array comparison task (S. J. Luck & E. K. Vogel, 1997) combined with various verbal memory load conditions. Interference between tasks occurred if there was explicit retrieval of the verbal load during maintenance of a visual array. The effect was localized in the maintenance period of the visual task and was not the result of articulation per se. Interference also occurred when especially large silent verbal and visual loads were held concurrently. These results suggest central storage along with code-specific passive storage. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

7.
Research indicates that false memory is lower following visual than auditory study, potentially because visual information is more distinctive. In the present study we tested the extent to which retrieval orientation can cause a modality effect on memory accuracy. Participants studied unrelated words in different modalities, followed by criterial recollection tests that selectively oriented retrieval toward one study modality at a time. Memory errors were lower when oriented toward visual than toward auditory information, thereby generalizing the modality effect to an explicit source memory task. Moreover, these effects persisted independent of the test presentation modality, indicating that retrieval orientation overrode the potential cuing properties of the test stimulus. An independent manipulation check confirmed that visual recollections were subjectively experienced as more distinctive than auditory recollections. These results suggest that retrieval orientation is sufficient to cause a modality effect on memory accuracy by focusing monitoring processes on the recollection of studied features that are diagnostic of prior presentation. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

8.
Previous studies have shown that the right hemisphere processes the visual details of objects and the emotionality of information. These two roles of the right hemisphere have not been examined concurrently. In the present study, the authors examined whether right hemisphere processing would lead to particularly good memory for the visual details of emotional stimuli. Participants viewed positive, negative, and neutral objects, displayed to the left or right of a fixation cross. Later, participants performed a recognition task in which they evaluated whether items were "same" (same visual details), "similar" (same verbal label, different visual details), or "new" (unrelated) in comparison with the studied objects. Participants remembered the visual details of negative items well, and this advantage in memory specificity was particularly pronounced when the items had been presented directly to the right hemisphere (i.e., to the left of the fixation cross). These results suggest that there is an episodic memory benefit conveyed when negative items are presented directly to the right hemisphere, likely because of the specialization of the right hemisphere for processing both visual detail and negatively valenced emotional information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
The authors assessed effects of alcohol consumption on different types of working memory (WM) tasks in an attempt to characterize the nature of alcohol effects on cognition. The WM tasks varied in 2 properties of materials to be retained in a 2-stimulus comparison procedure. Conditions included (a) spatial arrays of colors, (b) temporal sequences of colors, (c) spatial arrays of spoken digits, and (d) temporal sequences of spoken digits. Alcohol consumption impaired memory for auditory and visual sequences but not memory for simultaneous arrays of auditory or visual stimuli. These results suggest that processes needed to encode and maintain stimulus sequences, such as rehearsal, are more sensitive to alcohol intoxication than other WM mechanisms needed to maintain multiple concurrent items, such as focusing attention on them. These findings help to resolve disparate findings from prior research on alcohol's effect on WM and on divided attention. The results suggest that moderate doses of alcohol impair WM by affecting certain mnemonic strategies and executive processes rather than by shrinking the basic holding capacity of WM. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Verbal working memory involves two major components: a phonological store that holds auditory-verbal information very briefly and an articulatory rehearsal process that allows that information to be refreshed and thus held longer in short-term memory (A. Baddeley, 1996, 2000; A. Baddeley & G. Hitch, 1974). In the current study, the authors tested two groups of patients who were chosen on the basis of their relatively focal lesions in the inferior parietal (IP) cortex or inferior frontal (IF) cortex. Patients were tested on a series of tasks that have been previously shown to tap phonological storage (span, auditory rhyming, and repetition) and articulatory rehearsal (visual rhyming and a 2-back task). As predicted, IP patients were disproportionately impaired on the span, rhyming, and repetition tasks and thus demonstrated a phonological storage deficit. IF patients, however, did not show impairment on these storage tasks but did exhibit impairment on the visual rhyming task, which requires articulatory rehearsal. These findings lend further support to the working memory model and provide evidence of the roles of IP and IF cortex in separable working memory processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
In visual search tasks, if a set of items is presented for 1 s before another set of new items (containing the target) is added, search can be restricted to the new set. The process that eliminates old items from search is visual marking. This study investigates the kind of memory that distinguishes the old items from the new items during search. Using an accuracy paradigm in which perfect marking results in 100% accuracy and lack of marking results in near chance performance, the authors show that search can be restricted to new items not by visual short-term memory (VSTM) of old locations but by a limited-capacity, slow-decaying VSTM of new locations and a high-capacity, fast-decaying memory for asynchrony. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
The authors provide evidence that long-term memory encoding can occur for briefly viewed objects in a rapid serial visual presentation list, contrary to claims that the brief presentation and quick succession of objects prevent encoding by disrupting a memory consolidation process that requires hundreds of milliseconds of uninterrupted processing. Subjects performed a search task in which each item was presented for only 75 ms. Nontargets from the search task generated priming on 2 subsequent indirect memory tests: a search task and a task requiring identification of visually masked objects. Additional experiments revealed that information encoded into memory for these nontargets included perceptual and conceptual components, and that these results were not due to subjects maintaining items in working memory during list presentation. These results are consistent with recent neurophysiological evidence showing that stimulus processing can occur at later stages in the cognitive system even when a subsequent new stimulus is presented that initiates processing at earlier stages. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Three experiments examined verbal short-term memory in comparison participants and participants with autism spectrum disorder (ASD). Experiment 1 involved forward and backward digit recall. Experiment 2 used a standard immediate serial recall task where, contrary to the digit-span task, items (words) were not repeated from list to list. Hence, this task called more heavily on item memory. Experiment 3 tested short-term order memory with an order recognition test: Each word list was repeated with or without the position of 2 adjacent items swapped. The ASD group showed poorer performance in all 3 experiments. Experiments 1 and 2 showed that group differences were due to memory for the order of the items, not to memory for the items themselves. Confirming these findings, the results of Experiment 3 showed that the ASD group had more difficulty detecting a change in the temporal sequence of the items. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or inverted and a low- or high-load concurrent verbal WM task was administered to suppress contribution from verbal WM. Even with a high verbal memory load, visual WM performance was significantly better and capacity estimated as significantly greater for famous versus unfamiliar faces. Face inversion abolished this effect. Thus, neither strategic, explicit support from verbal WM nor low-level feature processing easily accounts for the observed benefit of high familiarity for visual WM. These results demonstrate that storage of items in visual WM can be enhanced if robust visual representations of them already exist in long-term memory. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Five experiments examined the recency–primacy shift in which memory for early list items improves and memory for later items becomes worse as the delay between study and test increases. Experiment 1 replicated the shift in a recognition task in which the physical form of the study and test items differed, ruling out an explanation that invokes visual memory. Experiment 2 observed the change when only 1 serial position was tested, eliminating an explanation based on changing strategies or proactive interference. Experiment 3 showed a similar change from recency to primacy when the to-be-remembered stimuli were auditory. Experiments 4 and 5 demonstrated that the same recency–primacy trade-off occurs for words in a sentence. Although it is possible to offer piecemeal explanations for each experiment, the dimensional distinctiveness model accounts for the results in each of the 5 experiments in exactly the same way. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
In 2 experiments, the authors tested whether the classical modality effect (i.e., the stronger recency effect for auditory items relative to visual items) can be extended to the spatial domain. An order reconstruction task was undertaken with four types of material: visual-spatial, auditory-spatial, visual-verbal, and auditory-verbal. Similar serial position curves were obtained regardless of the nature of the to-be-remembered sequences; notably, a modality effect was found with spatial as well as with verbal materials. The results are discussed with regard to a number of models of short-term memory. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
In this article, the authors investigated unimodal and cross-modal processes in spatial working memory. A number of locations had to be memorized within visual or haptic matrices according to different experimental conditions known to be critical in accounting for the effects of perception on imagery. Results reveal that some characteristics of the generated mental image remained strictly inherent to the modality in which information was acquired; in general, accuracy was higher when configurations were visually rather than haptically explored (Experiments 1 and 3). Interestingly, the same pattern emerged when the effects of simultaneous versus sequential processing of the stimuli inherent to vision and haptics were isolated from perceptual modality (Experiment 2). Supramodal elements were also identified (Experiment 3) that were specifically associated with the nature of the cognitive processes, regardless of the original characteristics of the sensory information. These data indicate that both unimodal modality-specific and higher-order supramodal mechanisms are simultaneously used in spatial processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Attention can be attracted faster by emotional relative to neutral information, and memory also can be strengthened for that emotional information. However, within visual scenes there is often an advantage in memory for central emotional portions at the expense of memory for peripheral background information, called an emotion-induced memory trade-off. The authors examined how aging affects the trade-off by manipulating the valence (positive, negative) and arousal (low, high) of a central emotional item within a neutral background scene and testing memory for the item and background components separately. They also assessed memory after 2 study–test delay intervals to investigate age differences in the trade-off over time. Results revealed similar patterns of performance between groups after a short study–test delay, with both age groups showing robust memory trade-offs. After a longer delay, young and older adults showed enhanced memory for emotional items, but at a cost to memory for background information only for young adults in negative arousing scenes. These results emphasize that attention and consolidation stage processes interact to shape how emotional memory is constructed in young and older adults. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Three experiments were conducted to evaluate the P300 component of the human evoked response as an index of bisensory information processing. On different blocks of trials, subjects were presented with auditory stimuli alone, visual stimuli alone, or with audiovisual compounds. In each series there were two possible stimuli, one of which was presented less frequently than the other; the subjects' task was to count the infrequent stimuli. In the first two experiments the information in the two modalities was redundant, whereas in the third the modalities provided nonredundant information. With redundant information, the P300 latency indicated bisensory facilitation when the unimodal P300 latencies were similar; when the unimodal latencies were dissimilar, the bisensory P300 occurred at the latency of the earlier unimodal P300. Reaction times paralleled P300 latency. When the information in the two modalities was nonredundant, both P300 amplitude and reaction-time data indicated interference between the two modalities, regardless of which modality was task relevant. P300 latency and reaction time did not covary in this situation. These data suggest that P300 latency and amplitude do reflect bisensory interactions and that the P300 promises to be a valuable tool for assessing brain processes during complex decision making.

20.
The effects of signal modality on duration classification in college students were studied with the duration bisection task. When auditory and visual signals were presented in the same test session and shared common anchor durations, visual signals were classified as shorter than equivalent-duration auditory signals. This occurred when auditory and visual signals were presented sequentially in the same test session and when they were presented simultaneously but asynchronously. Presentation of a single-modality signal within a test session, or of both modalities but with different anchor durations, did not result in classification differences. The authors posit a model in which auditory and visual signals drive an internal clock at different rates. The clock rate difference is due to an attentional effect on the mode switch and is revealed only when the memories for the short and long anchor durations consist of a mix of contributions from accumulations generated by both the fast auditory and slower visual clock rates. When this occurs, auditory signals seem longer than visual signals relative to the composite memory representation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
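The model described here is a pacemaker-accumulator (internal clock) account. The following toy sketch is only an illustration of the abstract's logic and uses hypothetical pulse rates and durations rather than values from the study: auditory signals drive the clock faster than visual signals, the short and long anchor memories are composites of auditory and visual accumulations, and the usual bisection rule (respond "long" if the accumulation is closer to the long anchor memory) is applied. Under these assumptions, a probe of the same physical duration is judged "long" when auditory but "short" when visual, reproducing the reported pattern.

# Toy pacemaker-accumulator sketch; all rates and durations are hypothetical.
AUDITORY_RATE = 10.0  # pulses per second (assumed faster clock rate for audition)
VISUAL_RATE = 8.0     # pulses per second (assumed slower clock rate for vision)

def accumulate(duration_s, rate):
    # Pulses gated into the accumulator while the signal is on.
    return duration_s * rate

# Anchor durations experienced in both modalities, so the remembered anchors
# are a composite of fast (auditory) and slow (visual) accumulations.
SHORT_S, LONG_S = 2.0, 8.0
short_anchor = (accumulate(SHORT_S, AUDITORY_RATE) + accumulate(SHORT_S, VISUAL_RATE)) / 2
long_anchor = (accumulate(LONG_S, AUDITORY_RATE) + accumulate(LONG_S, VISUAL_RATE)) / 2

def classify(duration_s, rate):
    # Bisection rule: respond "long" if the accumulation is closer to the
    # composite long-anchor memory than to the composite short-anchor memory.
    pulses = accumulate(duration_s, rate)
    return "long" if abs(pulses - long_anchor) < abs(pulses - short_anchor) else "short"

# The same 5 s probe is judged "long" in audition but "short" in vision,
# i.e., visual signals are classified as shorter than auditory signals.
print("auditory 5 s ->", classify(5.0, AUDITORY_RATE))
print("visual   5 s ->", classify(5.0, VISUAL_RATE))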
