Similar Articles
20 similar articles found.
1.
The nature of the information retained from previously fixated (and hence attended) objects in natural scenes was investigated. In a saccade-contingent change paradigm, participants successfully detected type and token changes (Experiment 1) or token and rotation changes (Experiment 2) to a target object when the object had been previously attended but was no longer within the focus of attention when the change occurred. In addition, participants demonstrated accurate type-, token-, and orientation-discrimination performance on subsequent long-term memory tests (Experiments 1 and 2) and during online perceptual processing of a scene (Experiment 3). These data suggest that relatively detailed visual information is retained in memory from previously attended objects in natural scenes. A model of scene perception and long-term memory is proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
In a change detection paradigm, a target object in a natural scene either rotated in depth, was replaced by another object token, or remained the same. Change detection performance was reliably higher when a target postcue allowed participants to restrict retrieval and comparison processes to the target object (Experiment 1). Change detection performance remained excellent when the target object was not attended at change (Experiment 2) and when a concurrent verbal working memory load minimized the possibility of verbal encoding (Experiment 3). Together, these data demonstrate that visual representations accumulate in memory from attended objects as the eyes and attention are oriented within a scene and that change blindness derives, at least in part, from retrieval and comparison failure. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object token or the target object rotated in depth. In Experiments 1 and 2, object memory performance was more accurate when the test object alternatives were displayed within the original scene than when they were displayed in isolation, demonstrating object-to-scene binding. Experiment 3 tested the hypothesis that episodic scene representations are formed through the binding of object representations to scene locations. Consistent with this hypothesis, memory performance was more accurate when the test alternatives were displayed within the scene at the same position originally occupied by the target than when they were displayed at a different position. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
How do observers search through familiar scenes? A novel panoramic search method is used to study the interaction of memory and vision in natural search behavior. In panoramic search, observers see part of an unchanging scene larger than their current field of view. A target object can be visible, present in the display but hidden from view, or absent. Visual search efficiency does not change after hundreds of trials through an unchanging scene (Experiment 1). Memory search, in contrast, begins inefficiently but becomes efficient with practice. Given a choice between vision and memory, observers choose vision (Experiments 2 and 3). However, if forced to use their memory on some trials, they learn to use memory on all trials, even when reliable visual information remains available (Experiment 4). The results suggest that observers make a pragmatic choice between vision and memory, with a strong bias toward visual search even for memorized stimuli. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to ensure that limited attentional capacity is allocated efficiently rather than being squandered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
This study investigated memory from interrupted visual searches. Participants conducted a change detection search task on polygons overlaid on scenes. Search was interrupted by various disruptions, including unfilled delay, passive viewing of other scenes, and additional search on new displays. Results showed that performance was unaffected by short intervals of unfilled delay or passive viewing, but it was impaired by additional search tasks. Across delays, memory for the spatial layout of the polygons was retained for future use, but memory for polygon shapes, background scene, and absolute polygon locations was not. The authors suggest that spatial memory aids interrupted visual searches, but the use of this memory is easily disrupted by additional searches. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
A "follow-the-dot" method was used to investigate the visual memory systems supporting accumulation of object information in natural scenes. Participants fixated a series of objects in each scene, following a dot cue from object to object. Memory for the visual form of a target object was then tested. Object memory was consistently superior for the two most recently fixated objects, a recency advantage indicating a visual short-term memory component to scene representation. In addition, objects examined earlier were remembered at rates well above chance, with no evidence of further forgetting when 10 objects intervened between target examination and test and only modest forgetting with 402 intervening objects. This robust prerecency performance indicates a visual long-term memory component to scene representation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

9.
How do animals remember what they see in daily life? The processes involved in remembering such visual information may be similar to those used in interpreting moving images on a monitor. In Experiment 1, 4 adult chimpanzees (Pan troglodytes) were required to discriminate between movies using a movie-to-movie matching-to-sample task. All chimpanzees demonstrated the ability to discriminate movies from the very 1st session onward. In Experiment 2, the ability to retain a movie was investigated through a matching-to-sample task using movie stills. To test which characteristics of movies are relevant to memory, the authors compared 2 conditions. In the continuous condition, the scenes comprising the movie progressed gradually, whereas in the discrete condition, the authors introduced a sudden change from one scene to another. Chimpanzees showed a recency effect only in the discrete condition, suggesting that composition and temporal order of scenes were used to remember the movies. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize, Experiment 2; or merely to attend, Experiment 3) and subsequently were required to search for a target among different distractors, each embedded within a colored shape. In half of the trials, an object in the search array matched the prime, but this object never contained the target. Despite this, search was impaired relative to a neutral baseline in which the prime and search displays did not match. An interesting finding is that verbal primes were effective in generating the effects, and verbalization of visual primes elicited similar effects to those elicited when primes were held in WM. However, the effects were absent when primes were only attended. The data suggest that there is automatic encoding into WM when items are verbalized and that verbal as well as visual WM can guide visual attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Meaningful visual experience requires computations that identify objects as the same persisting individuals over time, motion, occlusion, and featural change. This article explores these computations in the tunnel effect: When an object moves behind an occluder, and then an object later emerges following a consistent trajectory, observers irresistibly perceive a persisting object, even when the pre- and postocclusion views contrast featurally. This article introduces a new change detection method for quantifying percepts of the tunnel effect. Observers had to detect color changes in displays where several objects oscillated behind occluders and occasionally changed color. Across comparisons with several types of spatiotemporal gaps, as well as manipulations of occlusion versus implosion, performance was better when objects' kinematics gave the impression of a persisting individual. The results reveal a temporal same-object advantage: better change detection across temporal scene fragments bound into the same persisting object representations. This suggests that persisting objects are the underlying units of visual memory. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

12.
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was unaffected by changing the size of the scene from preview to search. Taken together, the results suggest that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory–percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments in which manual reaction times, saccadic reaction times, and event-related potential latencies were examined. However, these experiments also showed that a slow, limited-capacity process must occur before the observer can make a manual change detection response. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
In a contextual cuing paradigm, we examined how memory for the spatial structure of a natural scene guides visual search. Participants searched through arrays of objects that were embedded within depictions of real-world scenes. If a repeated search array was associated with a single scene during study, then array repetition produced significant contextual cuing. However, expression of that learning was dependent on instantiating the original scene in which the learning occurred: Contextual cuing was disrupted when the repeated array was transferred to a different scene. Such scene-specific learning was not absolute, however. Under conditions of high scene variability, repeated search arrays were learned independently of the scene background. These data suggest that when a consistent environmental structure is available, spatial representations supporting visual search are organized hierarchically, with memory for functional subregions of an environment nested within a representation of the larger scene. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Eye movements were recorded while participants viewed line-drawing pictures of natural scenes in preparation for a memory test (Experiment 1) or to find a target object (Experiment 2). Initial saccades in a scene were not controlled by semantic information in the visual periphery, although fixation densities and fixation durations were affected by semantic consistency. The results are compared with earlier eye-tracking studies, and a qualitative model of eye movement control in scene perception is discussed in which initial saccades in a scene are controlled by visual but not semantic analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Given a changing visual environment, and the limited capacity of visual working memory (VWM), the contents of VWM must be in constant flux. Using a change detection task, the authors show that VWM is subject to obligatory updating in the face of new information. Change detection performance is enhanced when the item that may change is retrospectively cued 1 s after memory encoding and 0.5 s before testing. The retro-cue benefit cannot be explained by memory decay or by a reduction in interference from other items held in VWM. Rather, orienting attention to a single memory item makes VWM more resistant to interference from the test probe. The authors conclude that the content of VWM is volatile unless it receives focused attention, and that the standard change detection task underestimates VWM capacity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM: With motion, color memory came to be associated with the new locations of objects. However, robust binding to the original locations was found despite clear evidence that the objects had moved. This latter binding appears to constitute a scene-based component in VWM, which codes object location relative to the abstract spatial configuration of the display and is largely insensitive to the dynamic properties of objects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
[Correction Notice: An erratum for this article was reported in Vol 36(4) of Journal of Experimental Psychology: Learning, Memory, and Cognition (see record 2010-12650-021). In the article, there was an error in the sixth sentence of the abstract. The sentence should read "Experiments 2 and 3 demonstrated that although identification was sensitive to orientation, visual priming was relatively invariant with image inversion (i.e., an image visually primed its inverted counterpart approximately as much as it primed itself)."] Object images are identified more efficiently after prior exposure. Here, the authors investigated shape representations supporting object priming. The dependent measure in all experiments was the minimum exposure duration required to correctly identify an object image in a rapid serial visual presentation stream. Priming was defined as the change in minimum exposure duration for identification as a function of prior exposure to an object. Experiment 1 demonstrated that this dependent measure yielded an estimate of predominantly visual priming (i.e., free of name and concept priming). Experiments 2 and 3 demonstrated that although identification was sensitive to orientation, visual priming was relatively invariant with image inversion (i.e., an image visually primed its inverted counterpart approximately as much as it primed itself). Experiment 4 demonstrated a similar dissociation with images rotated 90° off the upright. In all experiments, the difference in the magnitude of priming for identical or rotated–inverted priming conditions was marginal or nonexistent. These results suggest that visual representations that support priming can be relatively insensitive to picture-plane manipulations, although these manipulations have a substantial effect on object identification. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars presented from each category. At test, observers indicated which of 2 exemplars they had previously studied. Memory performance was high and remained quite high (82% accuracy) with 16 exemplars from a category in memory, demonstrating a large memory capacity for object exemplars. However, memory performance decreased as more exemplars were held in memory, implying systematic categorical interference. Object categories with conceptually distinctive exemplars showed less interference in memory as the number of exemplars increased. Interference in memory was not predicted by the perceptual distinctiveness of exemplars from an object category, though these perceptual measures predicted visual search rates for an object target among exemplars. These data provide evidence that observers' capacity to remember visual information in long-term memory depends more on conceptual structure than perceptual distinctiveness. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
The authors examined the prioritization of abruptly appearing objects in real-world scenes by measuring the eyes' propensity to be directed to the new object. New objects were fixated more often than chance whether they appeared during fixations (transient onsets) or saccades (nontransient onsets). However, onsets that appeared during fixations were fixated sooner and more often than those coincident with saccades. Prioritization of onsets during saccades, but not fixations, was affected by manipulations of memory: Reducing scene viewing time prior to the onset eliminated prioritization, whereas prior study of the scenes increased prioritization. Transient objects draw attention quickly and do not depend on memory, but without a transient signal, new objects are prioritized over several saccades as memory is used to explicitly identify the change. These effects were not modulated by observers' expectations concerning the appearance of new objects, suggesting that the prioritization of a transient is automatic and that memory-guided prioritization is implicit. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
