Similar Articles
20 similar articles found.
1.
There is evidence that complex objects are decomposed by the visual system into features, such as shape and color. Consistent with this theory is the phenomenon of illusory conjunctions, which occur when features are incorrectly combined to form an illusory object. We analyzed the perceived location of illusory conjunctions to study the roles of color and shape in the location of visual objects. In Experiments 1 and 2, participants located illusory conjunctions about halfway between the veridical locations of the component features. Experiment 3 showed that the distribution of perceived locations was not the mixture of two distributions centered at the 2 feature locations. Experiment 4 replicated these results with an identification task rather than a detection task. We concluded that the locations of illusory conjunctions were not arbitrary but were determined by both constituent shape and color. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
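The contrast drawn in this abstract, between a mixture of reports anchored at the two feature locations and reports clustered near their midpoint, can be made concrete with a small simulation. This is an illustrative sketch only; the positions, noise level, and sample size are assumptions, not values from the study.

```python
# Hypothetical sketch (not the authors' analysis): compare two accounts of where an
# illusory conjunction is perceived along a 1-D axis joining the two veridical feature
# locations (one feature at 0.0, the other at 1.0, arbitrary units).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
sigma = 0.15  # assumed localization noise

# Account A: mixture -- each report is anchored at one of the two feature locations.
anchors = rng.choice([0.0, 1.0], size=n)
mixture = rng.normal(anchors, sigma)

# Account B: averaging -- reports cluster around the midpoint of the two features.
averaged = rng.normal(0.5, sigma, size=n)

# A bimodal mixture has far greater spread than a unimodal midpoint distribution,
# which is one way the two accounts can be distinguished in data.
print(f"mixture   mean={mixture.mean():.2f} var={mixture.var():.3f}")
print(f"averaged  mean={averaged.mean():.2f} var={averaged.var():.3f}")
```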

2.
3.
Given the distributed representation of visual features in the human brain, binding mechanisms are necessary to integrate visual information about the same perceptual event. It has been assumed that feature codes are bound into object files—pointers to the neural codes of the features of a given event. The present study investigated the perceptual criteria underlying integration into an object file. Previous studies confounded the sharing of spatial location with belongingness to the same perceptual object, 2 factors we tried to disentangle. Our findings suggest that orientation and color features appearing in a task-irrelevant preview display were integrated irrespective of whether they appeared as part of the same object or of different objects (e.g., 1 stationary and the other moving continuously, or a banana in a particular orientation overlaying an apple of a particular color). In contrast, integration was markedly reduced when the 2 objects were separated in space. Taken together, these findings suggest that spatial overlap of visual features is a sufficient criterion for integrating them into the same object file. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3–4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3–4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Pictures of handled objects such as a beer mug or frying pan are shown to prime speeded reach and grasp actions that are compatible with the object. To determine whether the evocation of motor affordances implied by this result is driven merely by the physical orientation of the object's handle as opposed to higher-level properties of the object, including its function, prime objects were presented either in an upright orientation or rotated 90° from upright. Rotated objects successfully primed hand actions that fit the object's new orientation (e.g., a frying pan rotated 90° so that its handle pointed downward primed a vertically oriented power grasp), but only when the required grasp was commensurate with the object's proper function. This constraint suggests that rotated objects evoke motor representations only when they afford the potential to be readily positioned for functional action. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

6.
The capacity of visual working memory for features and conjunctions
Short-term memory storage can be divided into separate subsystems for verbal information and visual information, and recent studies have begun to delineate the neural substrates of these working-memory systems. Although the verbal storage system has been well characterized, the storage capacity of visual working memory has not yet been established for simple, suprathreshold features or for conjunctions of features. Here we demonstrate that it is possible to retain information about only four colours or orientations in visual working memory at one time. However, it is also possible to retain both the colour and the orientation of four objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing sixteen individual features to be retained when distributed across four objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features, which places significant constraints on cognitive and neurobiological models of the temporary storage of visual information.

7.
The geometric relation between physical and perceived space, as specified by binocular stereopsis and structure from motion, was investigated. Four experimental tasks were used, each of which required accurate perception of a different aspect of three-dimensional (3-D) structure in order to be performed accurately. To examine whether the transformation between physical and perceptual space preserved the 3-D structural properties required to perform each of our tasks, we assessed the constancy of judged shape over changes in a depicted object's viewing distance or orientation. Our results reveal that observers' judgments of 3-D shape from binocular stereopsis and motion contained systematic distortions: perceived 3-D shape from motion was not invariant over orientation change, and perceived 3-D structure from stereo, and from motion and stereo in combination, was not invariant over changes in viewing distance. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
The authors studied the influence of canonical orientation on visual search for object orientation. Displays consisted of pictures of animals whose axis of elongation was either vertical or tilted in their canonical orientation. Target orientation could be either congruent or incongruent with the object's canonical orientation. In Experiment 1, vertical canonical targets were detected faster when they were tilted (incongruent) than when they were vertical (congruent). This search asymmetry was reversed for tilted canonical targets. The effect of canonical orientation was partially preserved when objects were high-pass filtered, but it was eliminated when they were low-pass filtered, rendering them as unfamiliar shapes (Experiment 2). The effect of canonical orientation was also eliminated by inverting the objects (Experiment 3) and in a patient with visual agnosia (Experiment 4). These results show that orientation search with familiar objects can be modulated by canonical orientation, indicating a top-down influence on orientation processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
How do children learn associations between novel words and complex perceptual displays? Using a visual preference procedure, the authors tested 12- and 19-month-olds to see whether the infants would associate a novel word with a complex 2-part object or with either of that object's parts, both of which were potentially objects in their own right and 1 of which was highly salient to infants. At both ages, children's visual fixation times during test were greater to the entire complex object than to the salient part (Experiment 1) or to the less salient part (Experiment 2) when the original label was requested. Looking times to the objects were equal if a new label was requested or if neutral audio was used during training (Experiment 3). Thus, from 12 months of age, infants associate words with whole objects, even those that could potentially be construed as 2 separate objects and even if 1 of the parts is salient. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Explored the 6-mo-old's ability to recognize an object in a new orientation after being familiarized with the object while it was moving. In Exp I with a total of 58 Ss, there were 4 experimental conditions in which the object moved in different ways during familiarization and a control in which movement was minimal. The Ss in 3 of the movement conditions showed significant differentiation between the novel and familiar objects, whereas Ss in the control group did not, suggesting that movement does facilitate recognition. In the condition in which the infants could observe continuous transformations from one orientation to the next, there was no significant differentiation; the data suggest that the apparent difficulty in this case was due, in general, to the complexity of the movement and, in particular, to rotation. Translatory movement seemed to be the most effective in helping the Ss learn to recognize the object regardless of its orientation. Exp II, with 24 Ss, confirmed that 6-mo-olds learn or detect an object's structure faster during translation than during rotation. The role of optical change in the detection of an object's invariant structure is discussed. (9 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Information about the visual angle size of objects is important for maintaining object constancy with variations in viewing distance. Although human observers are quite accurate at judging spatial separations (or cross-sectional size), they are prone to error when there are other spans nearby, as in classical illusions such as the Müller-Lyer illusion. It is possible to reconcile these aspects of size perception by assuming that the size domain is sampled sparsely. It was shown by means of a visual search procedure that the size of objects is processed preattentively and in parallel across the visual field. It was demonstrated that an object's size, rather than its boundary curvature or spatial-frequency content, provides the basis for parallel visual search. It was also shown that texture borders could be substituted for luminance borders, indicating that object boundaries at the relevant spatial scale provide the input to size perception. Parallel processing imposes a severe computational constraint which provides support for the assumption of sparse sampling. An economical model based on several broadly tuned layers of size detectors is proposed to account for the parallel extraction of size, the Weberian behaviour of size discrimination, and the occurrence of strong interference effects in the size domain.
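The abstract's proposal of a few broadly tuned size detectors can be illustrated with a minimal sketch. The channel spacing, bandwidth, and population readout below are assumptions for illustration, not the model described in the paper; the sketch only shows how log-spaced, overlapping channels yield ratio-based (Weber-like) behaviour.

```python
# Hypothetical sketch of a sparse bank of broadly tuned size detectors
# (channel count, spacing, bandwidth, and readout are assumed, not the paper's model).
import numpy as np

preferred_sizes = np.geomspace(0.25, 8.0, num=6)   # a few log-spaced channels (deg)
bandwidth = 0.5                                    # tuning width in log-size units

def channel_responses(size_deg: float) -> np.ndarray:
    """Gaussian tuning on a log-size axis: broad, overlapping channels."""
    return np.exp(-0.5 * ((np.log(size_deg) - np.log(preferred_sizes)) / bandwidth) ** 2)

def decode_size(responses: np.ndarray) -> float:
    """Population readout: response-weighted average of preferred sizes (log domain)."""
    w = responses / responses.sum()
    return float(np.exp(np.sum(w * np.log(preferred_sizes))))

# Because tuning is defined on a log axis, equal *ratios* of size produce equal shifts
# in the response pattern -- a Weber-like property of the encoding.
for s in (0.5, 1.0, 2.0, 4.0):
    print(f"true size {s:.2f} deg -> decoded {decode_size(channel_responses(s)):.2f} deg")
```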

12.
The present study investigated object-based feature encoding in visual short-term memory for 2 features within the same dimension that occur on different parts of an object. Using the change-detection paradigm, this experiment studied objects with 2 colors and objects with 2 orientations. Participants found it easier to monitor 1 rather than both features of such objects, even when decision noise was properly controlled for. However, no object-based benefit was observed for encoding the 2 features of each object that were of the same dimension. When similar stimuli were used but the 2 features of each object were from different dimensions (color and orientation), an object-based benefit was observed. These results thus impose a major constraint on object-based feature encoding theories by showing that only features from different dimensions can benefit from object-based encoding. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
This study examined whether the perception of heading is determined by spatially pooling velocity information. Observers were presented with displays simulating observer motion through a volume of 3-D objects. To test the importance of spatial pooling, the authors systematically varied the nonrigidity of the flow field using two types of object motion: adding a unique rotation or translation to each object. Calculations of the signal-to-noise (observer velocity-to-object motion) ratio indicated no decrements in performance when the ratio was .39 for object rotation and .45 for object translation. Performance also increased with the number of objects in the scene. These results suggest that heading is determined by mechanisms that use spatial pooling over large regions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
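Why pooling helps can be seen in a toy simulation: purely translational flow radiates from the focus of expansion (FOE), independent object motion adds noise, and a least-squares fit over many flow vectors averages that noise away. This sketch is not the authors' stimulus or analysis; the scene geometry, noise level, and estimator are all assumed for illustration.

```python
# Hypothetical sketch: heading (FOE) recovery improves with spatial pooling.
import numpy as np

rng = np.random.default_rng(1)

def foe_error(n_objects: int, true_foe=(0.3, -0.2), noise=0.5) -> float:
    pos = rng.uniform(-1, 1, size=(n_objects, 2))            # object image positions
    radial = pos - np.asarray(true_foe)                       # expansion component
    flow = radial + noise * rng.normal(size=(n_objects, 2))   # plus independent object motion
    # Each vector constrains the FOE to lie on the line through pos along flow:
    # flow_y*(pos_x - foe_x) - flow_x*(pos_y - foe_y) = 0, which is linear in the FOE.
    A = np.column_stack([flow[:, 1], -flow[:, 0]])
    b = flow[:, 1] * pos[:, 0] - flow[:, 0] * pos[:, 1]
    foe_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(np.linalg.norm(foe_hat - np.asarray(true_foe)))

for n in (10, 50, 200):
    errs = [foe_error(n) for _ in range(200)]
    print(f"{n:4d} objects: mean FOE error = {np.mean(errs):.3f}")
```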

14.
This research concerned the use of mental rotation in recognizing rotated objects. Instead of the classic Shepard paradigm, in which subjects remained still while observing rotated objects, here subjects had to move (or imagine moving) around stationary three-dimensional objects placed in the middle of the trajectory. Thus, depending on the viewing position, the objects were seen from six different perspectives (from 30 degrees to 180 degrees). This task has been thought to be closer to everyday life, in which we obtain information about objects from their spatial properties. The results do not follow the classic rules of mental rotation, which predict a linear increase in the time needed to recognize disoriented objects as a function of their angular displacement. They also differ from data in the literature on spatial imagery showing that access to spatial information is facilitated more when people actually move through a path than when they imagine moving. A probable explanation of this difference is discussed in relation to the particular involvement of the body in the experimental task.

15.
In an earlier report (K. L. Harman, G. K. Humphrey, and M. A. Goodale, 1999), the authors demonstrated that observers who actively rotated 3-dimensional (3-D) novel objects on a computer screen later showed faster visual recognition of these objects than did observers who had passively viewed exactly the same sequence of images of these virtual objects. In Exp 1 of the present study, with 24 participants aged 18–30 years, the authors show that, compared with passive viewing, active exploration of 3-D object structure led to faster performance on a "mental rotation" task involving the studied objects. They also examined how much time observers concentrated on particular views during active exploration. As found in the previous report, observers spent most of their time looking at the "side" and "front" views ("plan" views) of the objects rather than the three-quarter or intermediate views. This preference for the plan views of an object led to Exp 2, with 24 participants aged 18–28 years, which examined whether restricting the studied views during active exploration to either the plan views or the intermediate views would result in differential learning. Recognition of objects was faster after active exploration limited to plan views than after active exploration of intermediate views. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Visual objects are perceived correctly only if their features are identified and then bound together. Illusory conjunctions result when feature identification is correct but an error occurs during feature binding. A new model is proposed that assumes feature binding errors occur because of uncertainty about the location of visual features. This model accounted for data from 2 new experiments better than a model derived from A. M. Treisman and H. Schmidt's (1982) feature integration theory. The traditional method for detecting the occurrence of true illusory conjunctions is shown to be fundamentally flawed. A reexamination of 2 previous studies provided new insights into the role of attention and location information in object perception and a reinterpretation of the deficits in patients who exhibit attentional disorders. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
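The core idea, that binding errors arise from uncertainty about feature locations, can be illustrated with a small simulation. This is not the authors' model; the display geometry, noise values, and nearest-location binding rule below are assumptions chosen only to show how conjunction errors grow as location noise increases relative to item spacing.

```python
# Hypothetical illustration of location-uncertainty binding: features are identified
# correctly, but their perceived locations are perturbed by Gaussian noise, and each
# color is bound to the nearest perceived shape location.
import numpy as np

rng = np.random.default_rng(2)
positions = np.array([0.0, 1.0, 2.0])   # three items along one axis (spacing = 1)

def conjunction_error_rate(sigma: float, trials: int = 20_000) -> float:
    errors = 0
    for _ in range(trials):
        shape_loc = positions + rng.normal(0, sigma, size=3)  # perceived shape locations
        color_loc = positions + rng.normal(0, sigma, size=3)  # perceived color locations
        # Bind each color to the nearest perceived shape location.
        binding = np.abs(color_loc[:, None] - shape_loc[None, :]).argmin(axis=1)
        errors += int(np.any(binding != np.arange(3)))
    return errors / trials

for sigma in (0.1, 0.3, 0.6):
    print(f"location noise sd={sigma:.1f} -> illusory-conjunction rate "
          f"{conjunction_error_rate(sigma):.2%}")
```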

17.
Blocking GABA(A)-receptor-mediated inhibition reduces the selectivity of striate cortical neurons for the orientation of a light bar primarily by reducing the selectivity of their onset transient (initial 200 ms) response. Blocking GABA(B)-receptor-mediated inhibition with phaclofen, however, is not reported to reduce the orientation selectivity of these neurons when it is measured with a light bar. We hypothesized that blocking GABA(B)-receptor-mediated inhibition would instead affect the orientation selectivity of cortical neurons by reducing the selectivity of their sustained response to a prolonged stimulus. To test this hypothesis, we stimulated 21 striate cortical neurons with drifting sine-wave gratings and measured their orientation selectivity before, during, and after iontophoretic injection of 2-hydroxy-saclofen (2-OH-S), a selective GABA(B)-receptor antagonist. 2-OH-S reduced the orientation selectivity of six of eight simple cells by an average of 28.8 ± 13.2% and reduced the orientation selectivity of eight of 13 complex cells by an average of 32.3 ± 27.4%. As predicted, 2-OH-S reduced the orientation selectivity of the neurons' sustained response, but did not reduce the orientation selectivity of their onset transient response. 2-OH-S also increased the length of spike "bursts" (two or more spikes with interspike intervals ≤ 8 ms) and eliminated the orientation selectivity of these bursts for six cells. These results are the first demonstration of a functional role for GABA(B) receptors in visual cortex and support the hypothesis that two GABA-mediated inhibitory mechanisms, one fast and the other slow, operate within the striate cortex to shape the response properties of individual neurons.
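The abstract does not state how orientation selectivity was quantified; one common choice is a vector-averaging orientation selectivity index (OSI) computed from mean firing rates across grating orientations. The sketch below uses that measure with made-up example rates to show how broader tuning (as under 2-OH-S blockade) lowers the index; both the index and the rates are assumptions, not the paper's data.

```python
# Sketch of a standard vector-averaging orientation selectivity index (OSI).
import numpy as np

def orientation_selectivity_index(orientations_deg, rates) -> float:
    """1 = responds at a single orientation; 0 = equal response at all orientations.
    Orientation is periodic over 180 deg, hence the factor of 2 in the angles."""
    theta = np.deg2rad(np.asarray(orientations_deg))
    r = np.asarray(rates, dtype=float)
    vector = np.sum(r * np.exp(2j * theta))
    return float(np.abs(vector) / np.sum(r))

orientations = [0, 30, 60, 90, 120, 150]      # grating orientations (deg)
before_block = [28, 12, 5, 3, 6, 14]          # assumed rates before blockade (spikes/s)
during_block = [24, 16, 10, 8, 11, 15]        # assumed broader tuning during blockade

print(f"OSI before blockade: {orientation_selectivity_index(orientations, before_block):.2f}")
print(f"OSI during blockade: {orientation_selectivity_index(orientations, during_block):.2f}")
```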

18.
Examined how novel, 3-dimensional shapes are represented in long-term memory and how this might be differentially affected by monocular and binocular viewing in 3 experiments with a total of 141 undergraduates. Exp 1 established that slide projections of the novel objects could be recognized readily if seen in the same orientation as seen during learning. Exps 2 and 3 examined generalization to novel depth rotations of the objects. Results are consistent with a growing body of recent research showing that, at least under certain conditions, the visual system stores viewpoint-specific representations of objects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Judgments of the spatial layout of a three-dimensional array of pictured dowels remain relatively constant as viewing angle changes, whereas judgments of their orientation relative to the observer (perceived orientation) vary. These changes in perceived orientation as viewing angle changes, called the differential rotation effect (DRE), also occur for stimuli such as the eyes in portraits, which are not extended in pictorial space. Thus, the mechanism for the DRE does not depend on the extension of pictured objects in depth. The DRE is decreased when back-illuminated pictures are viewed in the dark so that the picture plane is not visible. This result suggests that the DRE depends on information that defines a pictured object's direction relative to the picture plane. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.