Similar documents
1.
In 3 experiments, the question of viewpoint dependency in mental representations of dynamic scenes was addressed. Participants viewed film clips of soccer episodes from 1 or 2 viewpoints; they were then required to discriminate between video stills of the original episode and distractors. Recognition performance was measured in terms of accuracy and speed. The degree of viewpoint deviation between the initial presentation and the test stimuli was varied, as was both the point of time presented by the video stills and participants' soccer expertise. Findings suggest that viewers develop a viewpoint-dependent mental representation similar to the spatial characteristics of the original episode presentation, even if the presentation was spatially inhomogeneous. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Real-world objects can be viewed at a range of distances and thus can be experienced at a range of visual angles within the visual field. Given the large amount of visual size variation possible when observing objects, we examined how internal object representations represent visual size information. In a series of experiments which required observers to access existing object knowledge, we observed that real-world objects have a consistent visual size at which they are drawn, imagined, and preferentially viewed. Importantly, this visual size is proportional to the logarithm of the assumed size of the object in the world, and is best characterized not as a fixed visual angle, but by the ratio of the object and the frame of space around it. Akin to the previous literature on canonical perspective, we term this consistent visual size information the canonical visual size. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
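The relation described in this abstract can be sketched as a simple function: the preferred object-to-frame ratio grows with the logarithm of the object's assumed real-world size. The constants `k` and `c` below are purely illustrative placeholders, not fitted values from the study, and the example sizes are hypothetical.

```python
import math

def canonical_visual_ratio(assumed_size_cm, k=0.1, c=0.2):
    """Illustrative sketch only: predicted object-to-frame ratio as a
    linear function of the log of the object's assumed real-world size.
    k and c are made-up constants, not parameters from the paper."""
    return k * math.log10(assumed_size_cm) + c

# Hypothetical example objects and assumed sizes (cm)
for name, size in [("key", 6), ("chair", 90), ("car", 450)]:
    print(f"{name}: predicted ratio = {canonical_visual_ratio(size):.2f}")
```

The key property the abstract reports is monotonicity: larger assumed real-world size predicts a larger canonical object-to-frame ratio, with equal ratio steps per multiplicative step in physical size.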

3.
Five experiments demonstrated that adults can identify certain novel views of 3-dimensional model objects on the basis of knowledge of a single perspective. Geometrically irregular contour (wire) and surface (clay) objects and geometrically regular surface (pipe) objects were accurately recognized when rotated 180° about the vertical (y) axis. However, recognition accuracy was poor for all types of objects when rotated around the y-axis by 90°. Likewise, more subtle rotations in depth (i.e., 30° and 60°) induced decreases in recognition of both contour and surface objects. These results suggest that accurate recognition of objects rotated in depth by 180° may be achieved through use of information in objects' 2-dimensional bounding contours, the shapes of which remain invariant over flips in depth. Consistent with this interpretation, a final study showed that even slight rotations away from 180° cause precipitous drops in recognition accuracy. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
In contextual cueing, the position of a target within a group of distractors is learned over repeated exposure to a display with reference to a few nearby items rather than to the global pattern created by the elements. The authors contrasted the role of global and local contexts for contextual cueing in naturalistic scenes. Experiment 1 showed that learned target positions transfer when local information is altered but not when global information is changed. Experiment 2 showed that scene-target covariation is learned more slowly when local, but not global, information is repeated across trials than when global but not local information is repeated. Thus, in naturalistic scenes, observers are biased to associate target locations with global contexts. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
The authors used a recognition memory paradigm to assess the influence of color information on visual memory for images of natural scenes. Subjects performed 5-10% better for colored than for black-and-white images independent of exposure duration. Experiment 2 indicated little influence of contrast once the images were suprathreshold, and Experiment 3 revealed that performance worsened when images were presented in color and tested in black and white, or vice versa, leading to the conclusion that the surface property color is part of the memory representation. Experiments 4 and 5 exclude the possibility that the superior recognition memory for colored images results solely from attentional factors or saliency. Finally, the recognition memory advantage disappears for falsely colored images of natural scenes: The improvement in recognition memory depends on the color congruence of presented images with learned knowledge about the color gamut found within natural scenes. The results can be accounted for within a multiple memory systems framework. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
In a contextual cuing paradigm, we examined how memory for the spatial structure of a natural scene guides visual search. Participants searched through arrays of objects that were embedded within depictions of real-world scenes. If a repeated search array was associated with a single scene during study, then array repetition produced significant contextual cuing. However, expression of that learning was dependent on instantiating the original scene in which the learning occurred: Contextual cuing was disrupted when the repeated array was transferred to a different scene. Such scene-specific learning was not absolute, however. Under conditions of high scene variability, repeated search arrays were learned independently of the scene background. These data suggest that when a consistent environmental structure is available, spatial representations supporting visual search are organized hierarchically, with memory for functional subregions of an environment nested within a representation of the larger scene. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
In the current study, the authors investigated whether the ground dominance effect (the use of ground surface information for the perceptual organization of scenes) varied with age. In Experiment 1, a scene containing a ground, a ceiling, and 2 vertical posts was presented. The scene was either in its normal orientation or rotated to the side. In Experiment 2, a blue dot was attached to each post, with location varied from bottom to top of the posts. In Experiment 3, a scene similar to that in Experiment 1 was presented in different locations in the visual field. Observers judged which of the 2 objects (posts in Experiments 1 and 3, blue dots in Experiment 2) appeared to be closer. The results indicated that both younger (mean age = 22 years) and older observers (mean age = 73 years) responded consistently with the ground dominance effect. However, the magnitude of the effect decreased for older observers. These results suggest a decreased use of ground surface information by older observers for the perceptual organization of scene layout. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by a predefined target scene. The affective task involved saccading toward an unpleasant or pleasant scene, and the semantic task involved saccading toward a scene containing an animal. Both affective and semantic target scenes could be reliably categorized in less than 220 ms, but semantic categorization was always faster than affective categorization. This finding was replicated with singly, foveally presented scenes and manual responses. In comparison with foveal presentation, extrafoveal presentation slowed down the categorization of affective targets more than that of semantic targets. Exposure threshold for accurate categorization was lower for semantic information than for affective information. Superordinate-, basic-, and subordinate-level semantic categorizations were faster than affective evaluation. We conclude that affective analysis of scenes cannot bypass object recognition. Rather, semantic categorization precedes and is required for affective evaluation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to ensure that limited attentional capacity is allocated efficiently rather than being squandered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
The nature of the information retained from previously fixated (and hence attended) objects in natural scenes was investigated. In a saccade-contingent change paradigm, participants successfully detected type and token changes (Experiment 1) or token and rotation changes (Experiment 2) to a target object when the object had been previously attended but was no longer within the focus of attention when the change occurred. In addition, participants demonstrated accurate type-, token-, and orientation-discrimination performance on subsequent long-term memory tests (Experiments 1 and 2) and during online perceptual processing of a scene (Experiment 3). These data suggest that relatively detailed visual information is retained in memory from previously attended objects in natural scenes. A model of scene perception and long-term memory is proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
The authors report on different methods to probe the structure of visually perceived surfaces in 3 dimensions. The surfaces are specified by patterns of shading with Lambertian and specular components, which deform over time and over stereoscopic views. Five observers performed 2 probe tasks, 1 involving the adjustment of a punctate probe so as to be on the apparent surface and the other involving the adjustment of a small gauge figure that indicates surface attitude. The authors found that these rather different methods yielded essentially identical depth maps up to linear transformations and that the observers all deviate slightly from veridicality in basically identical ways. The nature of this deviation appears to be correlated with the rough topography of the specularities. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
60 college students were presented, tachistoscopically, with a list of 7 pleasant and 7 unpleasant 5-letter words that had been matched for frequency. All of the words were presented randomly to the subjects for various lengths of time, the Ss recording the word they believed to have been flashed on the screen before them. Significantly fewer errors of recognition were made on the pleasant words than on the unpleasant words. It was concluded that perceptual behavior, here defined as visual recognition thresholds, is influenced by the pleasantness or unpleasantness of the stimuli. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
To explore whether effects observed in human object recognition represent fundamental properties of visual perception that are general across species, the authors trained pigeons (Columba livia) and humans to discriminate between pictures of 3-dimensional objects that differed in shape. Novel pictures of the depth-rotated objects were then tested for recognition. Across conditions, the object pairs contained either 0, 1, 3, or 5 distinctive parts. Pigeons showed viewpoint dependence in all object-part conditions, and their performance declined systematically with degree of rotation from the nearest training view. Humans showed viewpoint invariance for novel rotations between the training views but viewpoint dependence for novel rotations outside the training views. For humans, but not pigeons, viewpoint dependence was weakest in the 1-part condition. The authors discuss the results in terms of structural and multiple-view models of object recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
The authors studied the influence of canonical orientation on visual search for object orientation. Displays consisted of pictures of animals whose axis of elongation was either vertical or tilted in their canonical orientation. Target orientation could be either congruent or incongruent with the object's canonical orientation. In Experiment 1, vertical canonical targets were detected faster when they were tilted (incongruent) than when they were vertical (congruent). This search asymmetry was reversed for tilted canonical targets. The effect of canonical orientation was partially preserved when objects were high-pass filtered, but it was eliminated when they were low-pass filtered, rendering them as unfamiliar shapes (Experiment 2). The effect of canonical orientation was also eliminated by inverting the objects (Experiment 3) and in a patient with visual agnosia (Experiment 4). These results indicate that orientation search with familiar objects can be modulated by canonical orientation, and they indicate a top-down influence on orientation processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Rhesus monkeys (Macaca mulatta) were taught a large number of visual discriminations and then either received bilateral removal of the perirhinal cortex or were retained as unoperated controls. Operated monkeys were impaired in retention of the preoperatively learned problems. To test for generalization to novel views, the monkeys were required to discriminate, in probe trials, familiar pairs of images that were rotated, enlarged, shrunken, presented with color deleted, or degraded by masks. Although these manipulations reduced accuracy in both groups, the operated group was not differentially affected. In contrast, the same operated monkeys were impaired in reversal of familiar discriminations and in acquisition of new single-pair discriminations. These results indicate an important role for perirhinal cortex in visual learning, memory, or both, and show that under a variety of conditions, perirhinal cortex is not critical for the identification of stimuli. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
The authors investigated the effects of changes in horizontal viewing angle on visual and audiovisual speech recognition in 4 experiments, using a talker's face viewed full face, three quarters, and in profile. When only experimental items were shown (Experiments 1 and 2), identification of unimodal visual speech and visual speech influences on congruent and incongruent auditory speech were unaffected by viewing angle changes. However, when experimental items were intermingled with distractor items (Experiments 3 and 4), identification of unimodal visual speech decreased with profile views, whereas visual speech influences on congruent and incongruent auditory speech remained unaffected by viewing angle changes. These findings indicate that audiovisual speech recognition withstands substantial changes in horizontal viewing angle, but explicit identification of visual speech is less robust. Implications of this distinction for understanding the processes underlying visual and audiovisual speech recognition are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
The orthographic uniqueness point (OUP) of a word is the position of the first letter from the left that distinguishes a word from all other words. In 2 recent studies (P. J. Kwantes & D. J. K. Mewhort, 1999a; A. K. Lindell, M. E. R. Nicholls, & A. E. Castles, 2003), it has been observed that words with an early OUP were processed more quickly than words with a late OUP. This has been taken to suggest that observers process the letters of words sequentially in a left-to-right order. In this article, it is shown that the OUP results do not provide selective evidence for left-to-right sequential processing in visual word recognition because the data are also compatible with an account in which letter processing occurs in random order. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
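The OUP as defined in this abstract is a concrete computation: scan the word's letters left to right and report the first position at which the prefix no longer matches any other word in the lexicon. A minimal sketch of that definition, using a toy lexicon invented for illustration (not the stimulus set from the cited studies):

```python
def orthographic_uniqueness_point(word, lexicon):
    """Return the 1-based position of the first letter (from the left) at
    which `word` diverges from every other word in `lexicon`, or None if
    no prefix uniquely identifies the word (e.g., the word is itself a
    prefix of another lexicon entry)."""
    others = [w for w in lexicon if w != word]
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        # The prefix fixes the word's identity once no other word shares it.
        if not any(w.startswith(prefix) for w in others):
            return i
    return None

# Toy lexicon, hypothetical and for illustration only
lexicon = ["table", "tablet", "tackle", "dwarf"]
print(orthographic_uniqueness_point("dwarf", lexicon))   # early OUP: position 1
print(orthographic_uniqueness_point("tackle", lexicon))  # later OUP: position 3
```

Note that "table" has no OUP in this toy lexicon, since every one of its prefixes is shared with "tablet"; real OUP studies select stimulus words for which a uniqueness point exists.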

18.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and we provide new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Martha J. Farah.
Recognizes Martha J. Farah for rigorous empirical and theoretical analysis of visual cognition, in which understanding of normal function and analysis of neurological deficits illuminate and strengthen one another. Applying diverse experimental techniques within an integrated approach to the study of mind and brain, she has sharpened and helped to answer fundamental questions about how humans generate and manipulate visual images, recognize objects, and attend to positions in space. Along with a citation, a biography is presented for Farah, as well as a bibliography of her works. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
