Similar Literature
20 similar articles retrieved (search time: 140 ms)
1.
A sequential matching task was used to compare how the difficulty of shape discrimination influences the achievement of object constancy for depth rotations across haptic and visual object recognition. Stimuli were nameable, 3-dimensional plastic models of familiar objects (e.g., bed, chair) and morphs midway between these endpoint shapes (e.g., a bed–chair morph). The 2 objects presented on a trial were either both placed at the same orientation or were rotated by 90° relative to each other. Discrimination difficulty was increased by presenting more similarly shaped objects on mismatch trials (easy: bed, then lizard; medium: bed, then chair; hard: bed, then bed–chair morph). For within-modal visual matching, orientation changes were most disruptive when shape discrimination was hardest. This interaction for 3-dimensional objects replicated the interaction reported in earlier studies presenting 2-dimensional pictures of the same objects (Lawson & Bülthoff, 2008). In contrast, orientation changes and discrimination difficulty had additive effects on within-modal haptic and cross-modal visual-to-haptic matching, whereas cross-modal haptic-to-visual matching was orientation invariant. These results suggest that the cause of orientation sensitivity may differ for visual and haptic object recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Two experiments investigated whether infants would look longer at a rotating "drawbridge" that appeared to violate physical laws because they knew that it was causally impossible, as claimed by R. Baillargeon, E. S. Spelke, and S. Wasserman (1985) and R. Baillargeon (1987a). Using a habituation paradigm, those studies reported that infants looked longer at a display that appeared impossible (rotated 180° while an obstructing box was behind it) than at one that appeared possible (rotated only 112°, appearing to stop at the box). Experiment 1 eliminated prior habituation to 180° screen rotations; infants still looked longer at the 180° impossible rotations. Critically, however, infants also looked longer at possible 180° rotations in Experiment 2, in which no obstruction was present. Moreover, no difference in effect size was found between the 2 experiments. These findings indicate that infants' longer looking at 180° rotations is due to a simple perceptual preference for more motion, and they call into question R. Baillargeon's (1987a) claim that it is due to infants' representational reasoning about physically impossible object permanence events. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
In an earlier report (K. L. Harman, G. K. Humphrey, and M. A. Goodale, 1999), the authors demonstrated that Os who actively rotated 3-dimensional (3-D) novel objects on a computer screen later showed faster visual recognition of these objects than did Os who had passively viewed exactly the same sequence of images of these virtual objects. In Exp 1 of the present study, using 24 18–30 yr olds, the authors show that, compared to passive viewing, active exploration of 3-D object structure led to faster performance on a "mental rotation" task involving the studied objects. They also examined how much time Os concentrated on particular views during active exploration. As found in the previous report, Os spent most of their time looking at the "side" and "front" views ("plan" views) of the objects, rather than the three-quarter or intermediate views. This preference for the plan views of an object motivated Exp 2, which examined whether restricting the studied views during active exploration to either the plan views or the intermediate views would result in differential learning. 24 18–28 yr olds were used in Exp 2. Recognition of objects was faster after active exploration limited to plan views than after active exploration of intermediate views. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Although both the object and the observer often move in natural environments, the effect of motion on visual object recognition has not been well documented. The authors examined the effect of a reversal in the direction of rotation on both explicit and implicit memory for novel, 3-dimensional objects. Participants viewed a series of continuously rotating objects and later made either an old-new recognition judgment or a symmetric-asymmetric decision. For both tasks, memory for rotating objects was impaired when the direction of rotation was reversed at test. These results demonstrate that dynamic information can play a role in visual object recognition and suggest that object representations can encode spatiotemporal information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
The authors report the case of a woman with a right basal ganglia lesion and severe mental-rotation impairments. She had no difficulty recognizing rotated objects and had intact left-right orientation in egocentric space but was unable to map the left and right sides of external objects to her egocentric reference frame. This study indicates that the right basal ganglia may be critical components in a cortico-subcortical network involved in mental rotation. The authors speculate that the role of these structures is to select and maintain an appropriate motor program for performing smooth and accurate rotation. The results also have important implications for theories of object recognition by demonstrating that recognition of rotated objects can be achieved without mental rotation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Tested 63 kindergartners on a spatial perspective task in which they had to copy the location and orientation of objects when the model and response spaces were aligned or one was rotated 90° or 180°. There were very few errors when the spaces were aligned, and there were significantly more errors on the 180° than the 90° rotations. Egocentric responding dominated spatial responding on the 180° rotations but was infrequent on the 90° rotations. These findings are explained as due to the symmetry relations between space and self for each perspective difference. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Four experiments used a 4-choice discrimination learning paradigm to explore the pigeon's recognition of line drawings of 4 objects (an airplane, a chair, a desk lamp, and a flashlight) that were rotated in depth. The pigeons reliably generalized discriminative responding to pictorial stimuli over all untrained depth rotations, despite the birds' having been trained at only a single depth orientation. These generalization gradients closely resembled those found in prior research that used other stimulus dimensions. Increasing the number of different vantage points in the training set from 1 to 3 broadened the range of generalized testing performance, with wider spacing of the training orientations more effectively broadening generalized responding. Template and geon theories of visual recognition are applied to these empirical results. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
9.
To explore whether effects observed in human object recognition represent fundamental properties of visual perception that are general across species, the authors trained pigeons (Columba livia) and humans to discriminate between pictures of 3-dimensional objects that differed in shape. Novel pictures of the depth-rotated objects were then tested for recognition. Across conditions, the object pairs contained either 0, 1, 3, or 5 distinctive parts. Pigeons showed viewpoint dependence in all object-part conditions, and their performance declined systematically with degree of rotation from the nearest training view. Humans showed viewpoint invariance for novel rotations between the training views but viewpoint dependence for novel rotations outside the training views. For humans, but not pigeons, viewpoint dependence was weakest in the 1-part condition. The authors discuss the results in terms of structural and multiple-view models of object recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Transformed spatial mappings were used to perturb normal visual–motor processes and reveal the structure of internal spatial representations used by the motor control system. In a 2-D discrete aiming task performed under rotated visual–motor mappings, the pattern of spatial movement error was the same for all Ss: peak error between 90° and 135° of rotation and low error for 180° rotation. A two-component spatial representation, based on oriented bidirectional movement axes plus direction of travel along such axes, is hypothesized. Observed reversals of movement direction under rotations greater than 90° are consistent with the hypothesized structure. Aiming error under reflections, unlike rotations, depended on direction of movement relative to the axis of reflection (H. A. Cunningham and M. Pavel, in press). RT and movement time effects were observed, but a speed-accuracy tradeoff was found only for rotations for which the direction-reversal strategy could be used. Finally, adaptation to rotation operates at all target locations equally but does not alter the relative difficulty of different rotations. Structural properties of the representation are invariant under learning. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
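The transformed mappings in this kind of study can be pictured as a fixed linear transform applied to the hand displacement before it drives the cursor. The following is a minimal illustrative sketch, not the authors' stimulus code; the 135° and 180° rotation angles and the vertical reflection axis are arbitrary choices for the example:

```python
import numpy as np

def rotate(vec, angle_deg):
    """Rotate a 2-D movement vector counterclockwise by angle_deg."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return rot @ vec

def reflect(vec, axis_angle_deg):
    """Reflect a 2-D movement vector about an axis at axis_angle_deg from horizontal."""
    a = np.radians(axis_angle_deg)
    ref = np.array([[np.cos(2 * a),  np.sin(2 * a)],
                    [np.sin(2 * a), -np.cos(2 * a)]])
    return ref @ vec

# A purely rightward hand movement, as it would appear on the display
hand_move = np.array([1.0, 0.0])
print(rotate(hand_move, 135))   # cursor heads up and to the left
print(rotate(hand_move, 180))   # cursor simply reverses along the same axis
print(reflect(hand_move, 90))   # cursor is mirrored about the vertical axis
```

The 180° case makes the hypothesized two-component structure concrete: the cursor motion is the hand motion reversed along the same oriented axis, so only the direction-of-travel component changes, which is consistent with the low error reported at 180°.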

11.
Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60° apart, pigeons, but not humans, recognized novel views of actual objects better than their pictures. Further, both species recognized interpolated views of both stimulus types better than extrapolated views, but a single distinctive geon enhanced recognition of novel views only for humans. When training views were 90° apart, pigeons recognized interpolated views better than extrapolated views with actual objects but not with photographs. Thus, pigeons may represent actual objects differently than their pictures. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Pictures of handled objects such as a beer mug or frying pan are shown to prime speeded reach and grasp actions that are compatible with the object. To determine whether the evocation of motor affordances implied by this result is driven merely by the physical orientation of the object's handle as opposed to higher-level properties of the object, including its function, prime objects were presented either in an upright orientation or rotated 90° from upright. Rotated objects successfully primed hand actions that fit the object's new orientation (e.g., a frying pan rotated 90° so that its handle pointed downward primed a vertically oriented power grasp), but only when the required grasp was commensurate with the object's proper function. This constraint suggests that rotated objects evoke motor representations only when they afford the potential to be readily positioned for functional action. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

13.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and present new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. It seems, then, that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
This study assessed the contribution of edge and surface cues to object representation in macaques (Macaca mulatta). In Experiments 1 and 2, 5 macaques were trained to discriminate 4 simple volumetric objects (geons) and were subsequently tested for their ability to recognize line drawings, silhouettes, and versions of these geons shown under changed lighting. Performance was above chance in all test conditions and was similarly high for the line drawings and silhouettes of geons, suggesting the use of the outline shape to recognize the original objects. In addition, transfer for the geons seen under new lighting was greater than for the other stimuli, stressing the importance of the shading information. Experiment 3, using geons filled with new textures, showed that a radical change in the surface cues does not prevent object recognition. It is concluded that these findings support a surface-based theory of object recognition in macaques, although this does not exclude the contribution of edge cues, especially when surface details are not available. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
16.
Examined how novel, 3-dimensional shapes are represented in long-term memory and how this might be differentially affected by monocular and binocular viewing in 3 experiments with a total of 141 undergraduates. Exp 1 established that slide projections of the novel objects could be recognized readily if seen in the same orientation as seen during learning. Exps 2 and 3 examined generalization to novel depth rotations of the objects. Results are consistent with a growing body of recent research showing that, at least under certain conditions, the visual system stores viewpoint-specific representations of objects. (French abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
This functional MRI study examined how people mentally rotate a 3-dimensional object (an alarm clock) that is retrieved from memory and rotated according to a sequence of auditory instructions. We manipulated the geometric properties of the rotation, such as having successive rotation steps around a single axis versus alternating between 2 axes. The latter condition produced much more activation in several areas. Also, the activation in several areas increased with the number of rotation steps. During successive rotations around a single axis, the activation was similar for rotations in the picture plane and rotations in depth. The parietal (but not extrastriate) activation was similar to mental rotation of a visually presented object. The findings indicate that a large-scale cortical network computes different types of spatial information by dynamically drawing on each of its components to a differential, situation-specific degree. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
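As a purely geometric illustration of the single-axis versus alternating-axes manipulation (a hedged sketch; the 90° step size and the particular x and z axes are assumptions for the example, not parameters taken from the study), successive rotation steps compose by matrix multiplication, and steps about different axes do not commute:

```python
import numpy as np

def rot_x(deg):
    """Rotation matrix for a rotation of deg degrees about the x axis."""
    a = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_z(deg):
    """Rotation matrix for a rotation of deg degrees about the z axis."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

# Two 90° steps about the same axis collapse into a single 180° rotation about that axis.
same_axis = rot_z(90) @ rot_z(90)

# Two 90° steps about alternating axes (z first, then x) give a different net
# orientation, and reversing the order gives yet another one.
alternating = rot_x(90) @ rot_z(90)

print(np.round(same_axis, 3))
print(np.round(alternating, 3))
print(np.allclose(alternating, rot_z(90) @ rot_x(90)))  # False: the steps do not commute
```

Tracking the net orientation in the alternating case requires re-relating each new rotation axis to the object's current orientation, which offers one intuition for why that condition could place heavier demands on the rotation network.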

18.
Presented 2-dimensional computer-generated representations of 3-dimensional objects in pairs to 20 male and 20 female right-handed undergraduates. Ss were given 15 sec to make a same-different judgment of the objects, one of which was rotated 0°, 40°, 80°, 120°, or 160° from the other. Ss were also assessed on 2 standard spatial ability tests (the Spatial Relations subtest of the Differential Aptitude Tests, Form L, and the Standardized Road-Map Test of Direction Sense) and a verbal-imagery questionnaire. Analyses of the data showed that men were more accurate than women, and that the slope of the function relating response time to degree of rotation was steeper in women. There was a significant linear relation between performance and the degree of rotation. Rate of rotation and accuracy correlated with the other tests of spatial ability. Response time slope correlated with imagery in men but not in women, suggesting that frequent use of visual imagery was related to mental rotation rate in men, but not in women. There were no clear relations between performance and the strategy Ss professed to use in doing the mental rotation. (French summary) (18 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
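The slope measure referred to here is conventionally read through the standard linear model of mental-rotation response times; the following is a restatement of that common analysis (with symbol names chosen for the example), not equations reproduced from the article:

```latex
% RT_0: intercept covering encoding, comparison, and response stages
% b: slope (time per degree); theta: angular disparity between the two objects
\[
  \mathrm{RT}(\theta) \;=\; \mathrm{RT}_0 + b\,\theta,
  \qquad
  \text{mental-rotation rate} \;=\; \frac{1}{b}\ \text{(degrees per unit time)}.
\]
```

Under this reading, a steeper slope corresponds to a slower estimated rotation rate, which is why the response-time slope and the "rate of rotation" are treated as two views of the same measure in the abstract.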

19.
Animal models have been central to advances made in understanding the neural basis of human cognition, but maximizing the use of animal models requires tasks that match those used to assess human subjects. Tasks used in humans frequently use visual 2-dimensional stimuli, assess 1-trial learning, and require little pretraining. This article describes novel versions of 2 tasks for the rat, spontaneous object recognition and spontaneous oddity preference, both of which use purely visual, 2-dimensional picture-card stimuli, test 1-trial learning, and require no pretraining. Rats showed robust memory for a variety of picture-card stimuli, demonstrating almost no loss of memory for some of the stimulus types even after a 2-hr delay period. Rats were able to show spontaneous oddity preference for all 3 visual stimulus types tested (photos, shapes, and patterns), as well as for 3-dimensional objects. These 2 tasks are quick to administer, involve no fearful learning associations, and require a simple apparatus. They may be particularly useful for high-throughput pharmacological or genetic screening using rodent models. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object token or the target object rotated in depth. In Experiments 1 and 2, object memory performance was more accurate when the test object alternatives were displayed within the original scene than when they were displayed in isolation, demonstrating object-to-scene binding. Experiment 3 tested the hypothesis that episodic scene representations are formed through the binding of object representations to scene locations. Consistent with this hypothesis, memory performance was more accurate when the test alternatives were displayed within the scene at the same position originally occupied by the target than when they were displayed at a different position. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
