Similar Documents
20 similar documents found.
1.
The results of two experiments suggest that strong constraints on the ability to imagine rotations extend to the perception of rotations. Participants viewed stereographic perspective views of rotating squares, regular polyhedra, and a variety of polyhedral generalized cones, and attempted to indicate the orientation of the axis and planes of rotation in terms of one of the 13 canonical directions in 3D space. When the axis and planes of a rotation were aligned with principal directions of the environment, participants could indicate the orientation of the motion well. When a rotation was oblique to the environment, the orientation of the object to the motion made a very large difference to performance. Participants were fast and accurate when the object was a generalized cone about the axis of rotation or was elongated along the axis. Varying the amount of rotational and reflective symmetry of the object about the axis of rotation was not a powerful factor. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
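A brief note: the "13 canonical directions in 3D space" used as response options above are commonly read as the symmetry axes of an environment-aligned cube: 3 axes through opposite face centers, 6 through opposite edge midpoints, and 4 through opposite corners. The sketch below enumerates such a set; this is our own reading for illustration, not code or terminology from the study.

```python
import itertools
import numpy as np

# Candidate directions through a cube's centre: components in {-1, 0, 1},
# excluding the zero vector. A direction and its negation describe the same axis.
candidates = [np.array(v) for v in itertools.product((-1, 0, 1), repeat=3) if any(v)]

axes = []
for v in candidates:
    # Keep only one member of each +/- pair.
    if not any(np.array_equal(v, -a) for a in axes):
        axes.append(v)

# 3 face axes + 6 edge axes + 4 corner (diagonal) axes = 13 canonical axes.
print(len(axes))  # -> 13
```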

2.
The kinematic bases by which humans imagine an object turning from one orientation to another are unknown. The studies reported here show that individuals of high spatial ability are, in most cases, unable to imagine a Shepard–Metzler object rotating about an axis and angle so as to accurately envision its appearance. Nor can they conceive of the axis and angle by which it would rotate along a shortest path between two orientations. Accuracy progressively improves across cases in which neither, one, or both of the angles (between the rotation axis and the viewer–environment frame, and between the axis and an object limb) are canonical. When the angles are canonical, they are more accurately seen from a single viewpoint to hold constant during the rotation. Such inability, with rare exceptions, probably extends to other kinematic operations requiring fine control of multiple spatial relations. Objects' orientations are not readily represented in terms of a shortest-path axis and angle. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
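The "shortest path" between two orientations mentioned in this abstract is the single axis-and-angle rotation carrying one orientation into the other; it can be computed from the two orientations' rotation matrices. A minimal illustrative sketch, assuming orientations are given as 3x3 rotation matrices (the function and variable names are ours, not the study's):

```python
import numpy as np

def shortest_path_axis_angle(R_start, R_end):
    """Axis and angle (radians) of the single rotation carrying R_start to R_end."""
    R = R_end @ R_start.T  # relative rotation
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        return np.array([0.0, 0.0, 1.0]), 0.0  # orientations coincide; axis arbitrary
    # Axis from the skew-symmetric part of R (valid for 0 < angle < pi).
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
    return axis, angle

# Example: a 90 degree turn about the vertical (z) axis.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
axis, angle = shortest_path_axis_angle(np.eye(3), Rz90)
print(axis, np.degrees(angle))  # -> [0. 0. 1.] 90.0
```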

3.
Previous research on spatial memory indicated that memories of small layouts were orientation dependent (orientation specific) but that memories of large layouts were orientation independent (orientation free). Two experiments investigated the relation between layout size and orientation dependency. Participants learned a small or a large 4-point path (Experiment 1) or a large display of objects (Experiment 2) and then made judgments of relative direction from imagined headings that were either the same as or different from the single studied orientation. Judgments were faster and more accurate when the imagined heading was the same as the studied orientation (i.e., aligned) than when the imagined heading differed from the studied orientation (i.e., misaligned). This alignment effect was present for both small and large layouts. These results indicate that location is encoded in an orientation-dependent manner regardless of layout size.
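For context, a judgment of relative direction amounts to: imagine standing at object A, facing object B; point to object C. The correct response is the signed angle between the imagined heading and the direction to the target, which follows directly from the layout's 2D coordinates. A minimal sketch of that geometry (illustrative only; the object names and coordinates are hypothetical, not the stimuli used in these experiments):

```python
import math

def relative_direction(standing, facing, target):
    """Signed pointing angle in degrees for an observer imagined at `standing`
    and facing `facing`; positive values mean the target lies to the right."""
    hx, hy = facing[0] - standing[0], facing[1] - standing[1]  # heading vector
    tx, ty = target[0] - standing[0], target[1] - standing[1]  # target vector
    diff = math.degrees(math.atan2(hy, hx) - math.atan2(ty, tx))
    return (diff + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)

# Hypothetical layout: stand at the lamp, face the clock, point to the chair.
lamp, clock, chair = (0.0, 0.0), (0.0, 1.0), (1.0, 0.0)
print(relative_direction(lamp, clock, chair))  # -> 90.0 (chair is to the right)
```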

4.
Previous research on spatial memory indicated that memories of small layouts were orientation dependent (orientation specific) but that memories of large layouts were orientation independent (orientation free). Two experiments investigated the relation between layout size and orientation dependency. Participants learned a small or a large 4-point path (Experiment 1) or a large display of objects (Experiment 2) and then made judgments of relative direction from imagined headings that were either the same as or different from the single studied orientation. Judgments were faster and more accurate when the imagined heading was the same as the studied orientation (i.e., aligned) than when the imagined heading differed from the studied orientation (i.e., misaligned). This alignment effect was present for both small and large layouts. These results indicate that location is encoded in an orientation-dependent manner regardless of layout size. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Four experiments investigated the conditions contributing to sensorimotor alignment effects (i.e., the advantage for spatial judgments from imagined perspectives aligned with the body). Through virtual reality technology, participants learned object locations around a room (learning room) and made spatial judgments from imagined perspectives aligned or misaligned with their actual facing direction. Sensorimotor alignment effects were found when testing occurred in the learning room but not after walking 3 m into a neighboring (novel) room. Sensorimotor alignment effects reappeared after participants returned to the learning room or after they were given egocentric imagery instructions in the novel room. Additionally, visual and spatial similarities between the test and learning environments were independently sufficient to cause sensorimotor alignment effects. Memory alignment effects, independent from sensorimotor alignment effects, occurred in all testing conditions. Results are interpreted in the context of two-system spatial memory theories positing separate representations to account for sensorimotor and memory alignment effects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
This research concerned the use of mental rotation in recognizing rotated objects. Instead of the classic Shepard paradigm, in which subjects remained stationary while observing rotated objects, here subjects had to move (or imagine moving) around stationary three-dimensional objects placed in the middle of the trajectory. Thus, depending on the viewing positions, such objects were seen under six different perspectives (from 30 degrees to 180 degrees). This task is thought to be closer to everyday life, in which we obtain information about objects from their spatial properties. The results do not follow the classic rule of mental rotation, which predicts a linear increase in the time needed to recognize distorted objects as a function of their angular displacement. They also differ from data in the spatial imagery literature showing that access to spatial information is facilitated more when people actually move along a path than when they merely imagine moving. A probable explanation of this difference from the literature is discussed in relation to the particular involvement of the body in the experimental task.

7.
We examined automatic spatial alignment effects evoked by handled objects. With color as the relevant cue, carried by a task-irrelevant handled object whose handle was aligned or misaligned with the response hand, responses to color were faster when the handle was aligned with the response hand. Alignment effects were observed only when the task was to make a reach-and-grasp response. No alignment effects occurred if the response involved a left or right key press. Alignment effects emerged over time, becoming more apparent either when the color cue was delayed or when relatively long, rather than short, response times were analyzed. These results are consistent with neurophysiological evidence indicating that the cued goal state has a modulatory influence on sensorimotor representations, and that handled objects initially generate competition between neural populations coding for a left- or right-handed action that must be resolved before a particular hand is favored. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Given the distributed representation of visual features in the human brain, binding mechanisms are necessary to integrate visual information about the same perceptual event. It has been assumed that feature codes are bound into object files—pointers to the neural codes of the features of a given event. The present study investigated the perceptual criteria underlying integration into an object file. Previous studies confounded the sharing of spatial location with belongingness to the same perceptual object, 2 factors we tried to disentangle. Our findings suggest that orientation and color features appearing in a task-irrelevant preview display were integrated irrespective of whether they appeared as part of the same object or of different objects (e.g., 1 stationary and the other moving continuously, or a banana in a particular orientation overlaying an apple of a particular color). In contrast, integration was markedly reduced when the 2 objects were separated in space. Taken together, these findings suggest that spatial overlap of visual features is a sufficient criterion for integrating them into the same object file. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Investigated the ability of 20-, 60-, and 70-yr-olds (12 men and 12 women at each age level) to mentally manipulate spatial information in a large-scale environment. In a perspective-taking task, Ss were asked to determine the location of target objects from imagined locations. In an array rotation task, Ss were asked to imagine that the array of objects rotated relative to their current position. Young and elderly Ss performed with equivalent accuracy on the array rotation task, but young Ss were more accurate on the perspective-taking task. Ss who were taken to each object location prior to testing were more accurate in the perspective-taking task than Ss who did not have this experience. There was no effect of prior experience on performance in the array rotation task. (4 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
11.
Four experiments examined reference systems in spatial memories acquired from language. Participants read narratives that located 4 objects in canonical (front, back, left, right) or noncanonical (left front, right front, left back, right back) positions around them. Participants' focus of attention was first set on each of the 4 objects, and then they were asked to report the name of the object at the location indicated by a direction word or an iconic arrow. The results indicated that spatial memories were represented in terms of intrinsic (object-to-object) reference systems, which were selected using egocentric cues (e.g., alignment with body axes). Results also indicated that linguistic direction cues were comprehended in terms of egocentric reference systems, whereas iconic arrows were not. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
The effects of lesions centred in the perirhinal cortex region (Prh) or in both the perirhinal cortex region and the fornix (Prh + Fx) were assessed in two different working memory tasks, one spatial, the other nonspatial. For the spatial task the rats were tested in an eight-arm radial maze, using a standard procedure in which they were rewarded for avoiding previously visited arms. The Prh + Fx, but not the Prh, rats produced significantly more errors (re-entries), and these started significantly earlier in each session when compared with a surgical control group. The nonspatial task was a test of spontaneous object recognition in which rats were tested on their ability to discriminate between a familiar and a novel object. For the initial tests the Prh group failed to discriminate between the objects, but the Prh + Fx group showed a clear preference for the novel object. Observation of the test showed, however, that the Prh + Fx group spent a greater length of time initially exploring the sample (familiar) object. When the amount of exposure to the sample object was limited to either 20 or 40 s (i.e., was the same for all three groups), the Prh + Fx group now failed to discriminate between the two objects. This change was especially evident for the shorter sample duration (20 s). The Prh group did, however, show an amelioration of their deficit with this further testing. The present results support a previously reported dissociation between spatial and nonspatial working memory, and indicate that there may be some recovery of function following perirhinal cortical damage.

13.
Tested 63 kindergartners on a spatial perspective task in which they had to copy the location and orientation of objects when the model and response spaces were aligned or one was rotated 90° or 180°. There were very few errors when the spaces were aligned, and there were significantly more errors on the 180° than the 90° rotations. Egocentric responding dominated spatial responding on the 180° but was infrequent on the 90° rotations. These findings are explained as due to the symmetry relations between space and self for each perspective difference. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
A sequential matching task was used to compare how the difficulty of shape discrimination influences the achievement of object constancy for depth rotations across haptic and visual object recognition. Stimuli were nameable, 3-dimensional plastic models of familiar objects (e.g., bed, chair) and morphs midway between these endpoint shapes (e.g., a bed–chair morph). The 2 objects presented on a trial were either both placed at the same orientation or were rotated by 90° relative to each other. Discrimination difficulty was increased by presenting more similarly shaped objects on mismatch trials (easy: bed, then lizard; medium: bed, then chair; hard: bed, then bed–chair morph). For within-modal visual matching, orientation changes were most disruptive when shape discrimination was hardest. This interaction for 3-dimensional objects replicated the interaction reported in earlier studies presenting 2-dimensional pictures of the same objects (Lawson & Bülthoff, 2008). In contrast, orientation changes and discrimination difficulty had additive effects on within-modal haptic and cross-modal visual-to-haptic matching, whereas cross-modal haptic-to-visual matching was orientation invariant. These results suggest that the cause of orientation sensitivity may differ for visual and haptic object recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
The time to name two-dimensional line drawings of objects increases linearly for object rotations between 0° and 120° from the upright. Several theories attribute these effects of orientation to finding the top or the top-bottom axis of objects. By this account, prior knowledge of the location of the top or the top-bottom axis of objects should diminish effects of object orientation when they are named. When this hypothesis was tested by cuing the top or the top-bottom axis, no reduction in the effects of orientation on object naming was found. This result is inconsistent with effects of orientation on object naming being due to finding the top or the top-bottom axis. Instead, the top may be found prior to rotational normalization of the object image. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
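The linear trend described above can be written as RT = a + b * theta for misorientations theta between 0° and 120° from upright. A toy sketch of such a model with made-up intercept and slope values (the parameters are illustrative assumptions, not estimates from this study):

```python
# Hypothetical linear naming-time model: RT (ms) as a function of
# misorientation theta (degrees from upright), valid for 0 <= theta <= 120.
INTERCEPT_MS = 700.0     # assumed baseline naming time at upright
SLOPE_MS_PER_DEG = 2.5   # assumed extra cost per degree of misorientation

def predicted_naming_time(theta_deg):
    if not 0.0 <= theta_deg <= 120.0:
        raise ValueError("the linear range reported here is 0-120 degrees")
    return INTERCEPT_MS + SLOPE_MS_PER_DEG * theta_deg

print(predicted_naming_time(60.0))  # -> 850.0 ms with these made-up parameters
```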

16.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and present new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to the findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
People are able to experience (a) a reflective mode of consciousness, in which they are aware of themselves and elements of their environment as objects, and (b) a nonreflective mode, in which they experience themselves as at one with the contents of consciousness. The suicidal individual is seen as identified with the reflective self and alienated by shame and fear from nonreflective being. Nonreflectiveness amounts to the nonexistence of the self as an object. But the intellect, identified with the object self, confuses the nonreflective extinction of the object self with physical death. To imagine death as an absence of consciousness would seem an impossibility, since imagination is itself an act of consciousness. Hence, the fantasy of death is more likely to be a fantasy of nonresponsiveness to the world of objects. Suicidal thoughts and feelings are viewed as a symbolic expression of the desire to function nonreflectively and the frustration at being unable to do so. E. S. Shneidman's (1965) "depressed, defiant, and dependent-dissatisfied" suicidal types are seen as categories of defensive maneuvers against fears of nonreflective functioning. Much cognitive mythology exists which equates nonreflective functioning with irrationality and being out of control. Therapists must understand in themselves and validate in the people with whom they work the need for nonreflective functioning. (7 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Children's grasp of make-believe transformations was studied. In Exp 1, children saw an adult enact a pretend change (e.g., sprinkling pretend talcum powder over a toy cat). They indicated the pretend outcome by choosing between a picture depicting no change (e.g., cat without talcum powder on its body) and a picture depicting the pretend change (e.g., cat covered with talcum powder). Older children (M = 29 mo) chose correctly, but younger children (M = 21 mo) did not. A similar age change emerged in Exp 2 despite the addition of a picture of an irrelevant transformation. Exp 3 showed that children with autism can imagine pretend transformations. Implications for children's imagination and the autistic syndrome are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
The cognitive advantage of imagined spatial transformations of the human body over those of more unfamiliar objects (e.g., Shepard-Metzler [S-M] cubes) is an issue for validating motor theories of visual perception. In 6 experiments, the authors show that providing S-M cubes with body characteristics (e.g., by adding a head to S-M cubes to evoke a posture) facilitates the mapping of the cognitive coordinate system of one's body onto the abstract shape. In turn, this spatial embodiment improves object shape matching. Thanks to the increased cohesiveness of human posture in people's body schema, imagined transformations of the body operate in a less piecemeal fashion than those of objects (S-M cubes or swing-arm desk lamps) in a similar spatial configuration, provided that the pose can be embodied. If the pose cannot be emulated (covert imitation) by the sensorimotor system, the facilitation due to motoric embodiment is also disrupted. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Pictures of handled objects such as a beer mug or frying pan are shown to prime speeded reach and grasp actions that are compatible with the object. To determine whether the evocation of motor affordances implied by this result is driven merely by the physical orientation of the object's handle as opposed to higher-level properties of the object, including its function, prime objects were presented either in an upright orientation or rotated 90° from upright. Rotated objects successfully primed hand actions that fit the object's new orientation (e.g., a frying pan rotated 90° so that its handle pointed downward primed a vertically oriented power grasp), but only when the required grasp was commensurate with the object's proper function. This constraint suggests that rotated objects evoke motor representations only when they afford the potential to be readily positioned for functional action. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
