Similar documents
 Found 20 similar documents (search time: 78 ms)
1.
This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object token or the target object rotated in depth. In Experiments 1 and 2, object memory performance was more accurate when the test object alternatives were displayed within the original scene than when they were displayed in isolation, demonstrating object-to-scene binding. Experiment 3 tested the hypothesis that episodic scene representations are formed through the binding of object representations to scene locations. Consistent with this hypothesis, memory performance was more accurate when the test alternatives were displayed within the scene at the same position originally occupied by the target than when they were displayed at a different position. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
[Correction Notice: An erratum for this article was reported in Vol 32(2) of Journal of Experimental Psychology: Learning, Memory, and Cognition (see record 2007-16796-001). The note to Appendix B (Stimuli Used in Experiment 2) on p. 14 contained errors. The fourth sentence, "For example, for participants receiving List A, lock was the target, key was the semantically related object, deer was the target's control, and apple was the related objects control" should read as follows: "For example, for participants receiving List A, logs was the target, key was the semantic onset competitor, and apple was the competitor's control."] Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɒg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
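The sensitivity and response-criterion estimates mentioned in this abstract come from standard signal detection theory, which separates perceptual sensitivity (d′) from response bias (c) using hit and false-alarm rates. A minimal sketch of that computation follows; the formulas are the textbook ones (not code from the study), and the example counts are hypothetical:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response criterion (c) from raw counts.

    Applies the log-linear correction (add 0.5 to each cell) so that
    perfect or zero rates do not produce infinite z-scores.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(h) - z(fa)
    criterion = -0.5 * (z(h) + z(fa))
    return d_prime, criterion

# Hypothetical counts for one participant in a yes/no detection task
d, c = dprime_criterion(hits=40, misses=10, false_alarms=5, correct_rejections=45)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

A positive c indicates a conservative criterion (a bias toward responding "absent"); d′ near 0 indicates chance-level detection regardless of bias.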

4.
Previous research has suggested that pictures have privileged access to semantic memory (W. R. Glaser, 1992), but J. Theios and P. C. Amrhein (1989b) argued that prior studies inappropriately used large pictures and small words. In Experiment 1, participants categorized pictures reliably faster than words, even when both types of items were of optimal perceptual size. In Experiment 2, a poststimulus flashmask and judgments about internal features did not eliminate picture superiority, indicating that it was not due to differences in early visual processing or analysis of visible features. In Experiment 3, when participants made judgments about whether items were related, latencies were reliably faster for categorically related pictures than for words, but there was no picture advantage for noncategorically associated items. Results indicate that pictures have privileged access to semantic memory for categories, but that neither pictures nor words seem to have privileged access to noncategorical associations.

5.
Reports an error in "Eye Movements to Pictures Reveal Transient Semantic Activation During Spoken Word Recognition" by Eiling Yee and Julie C. Sedivy (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2006[Jan], Vol 32[1], 1-14). The note to Appendix B (Stimuli Used in Experiment 2) on p. 14 contained errors. The fourth sentence, "For example, for participants receiving List A, lock was the target, key was the semantically related object, deer was the target's control, and apple was the related objects control" should read as follows: "For example, for participants receiving List A, logs was the target, key was the semantic onset competitor, and apple was the competitor's control." (The following abstract of the original article appeared in record 2006-01955-001.) Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object. In experiment 1, responses were slower when the spoken word referred to the distractor object than when it referred to the target object. In experiment 2, responses were slower when the spoken word referred to a distractor object than when it referred to an object not in the display. In experiment 3, the cue was a small shift in location of the target object and participants indicated the direction of the shift. Responses were slowest when the word referred to the distractor object, faster when the word did not have a referent, and fastest when the word referred to the target object. Taken together, the results demonstrate that referents of spoken words capture attention. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

7.
A number of recent studies have questioned the idea that lexical selection during speech production is a competitive process. One type of evidence against selection by competition is the observation that in the picture–word interference task semantically related distractors may facilitate the naming of a picture, whereas the selection by competition account predicts them to interfere. In the experiments reported in this article, the authors systematically varied, for a given type of semantic relation—that is, basic-level distractors (e.g., fish) during subordinate-level naming (e.g., carp)—the modality in which distractor words were presented (auditory vs. visual) and the proportion of response-congruent trials (i.e., trials allowing for the correct naming response to be derived from both the distractor and the target). With auditory distractors, semantic interference was obtained irrespective of the proportion of response-congruent trials (low in Experiment 1, high in Experiment 2). With visual distractors, no semantic effect was obtained with a low proportion of response-congruent trials (Experiment 3), whereas a semantic facilitation effect was obtained with a high proportion of response-congruent trials (Experiment 4). The authors propose that two processes contribute to semantic effects observed in the picture–word interference paradigm, namely selection by competition (leading to interference) and response congruency (leading to facilitation). Whether facilitation due to response congruency overrules the interference effect because of competition depends on the relative strength of these two processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Most theories of semantic memory characterize knowledge of a given object as comprising a set of semantic features. But how does conceptual activation of these features proceed during object identification? We present the results of a pair of experiments that demonstrate that object recognition is a dynamically unfolding process in which function follows form. We used eye movements to explore whether activating one object's concept leads to the activation of others that share perceptual (shape) or abstract (function) features. Participants viewed 4-picture displays and clicked on the picture corresponding to a heard word. In critical trials, the conceptual representation of 1 of the objects in the display was similar in shape or function (i.e., its purpose) to the heard word. Importantly, this similarity was not apparent in the visual depictions (e.g., for the target Frisbee, the shape-related object was a triangular slice of pizza, a shape that a Frisbee cannot take); preferential fixations on the related object were therefore attributable to overlap of the conceptual representations on the relevant features. We observed relatedness effects for both shape and function, but shape effects occurred earlier than function effects. We discuss the implications of these findings for current accounts of the representation of semantic memory. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

9.
In 2 experiments participants named pictures of common objects with superimposed distractor words. In one naming condition, the pictures and words were presented simultaneously on every trial, and participants produced the target response immediately. In the other naming condition, the presentation of the picture preceded the presentation of the distractor by 1,000 ms, and participants delayed production of their naming response until distractor word presentation. Within each naming condition, the distractor words were either semantic category coordinates of the target pictures or unrelated. Orthogonal to this manipulation of semantic relatedness, the frequency of the pictures' names was manipulated. The authors observed semantic interference effects in both the immediate and delayed naming conditions but a frequency effect only in the immediate naming condition. These data indicate that semantic interference can be observed when target picture naming latencies do not reflect the bottleneck at the level of lexical selection. In the context of other findings from the picture-word interference paradigm, the authors interpret these data as supporting the view that the semantic interference effect arises at a postlexical level of processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
The authors investigated the impact of semantic knowledge on visual object analysis by assessing the performance of patients with semantic dementia on a different-views object matching test and on 2 object decision tests differing, for example, in whether the nonreal items were nonsense objects or chimeras of 2 real objects. On average, the patients scored normally on both the object matching and the object decision test including nonsense objects but were impaired on the object decision test including chimeras; this latter was also the only visual object test that correlated significantly with degree of semantic impairment. These findings demonstrate that object decision is not a single task or ability and that it is not necessarily independent of conceptual knowledge. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
When speakers produce words, lexical access proceeds through semantic and phonological levels of processing. If phonological processing begins based on partial semantic information, processing is cascaded; otherwise, it is discrete. In standard models of lexical access, semantically processed words exert phonological effects only if processing is cascaded. In 3 experiments, speakers named pictures of objects with homophone names (ball), while auditory distractor words were heard beginning 150 ms prior to picture onset. Distractors speeded picture naming (compared with controls) only when related to the nondepicted meaning of the picture (e.g., dance), exhibiting an early phonological effect, thereby supporting the cascaded prediction. Distractors slowed picture naming when categorically (e.g., frisbee) related to the depicted picture meaning, but not when associatively (e.g., game) related to it. An interactive activation model is presented. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
The effects of moving task-irrelevant objects on time-to-contact (TTC) judgments were examined in 5 experiments. Observers viewed a directly approaching target in the presence of a distractor object moving in parallel with the target. In Experiments 1 to 4, observers decided whether the target would have collided with them earlier or later than a standard (absolute identification task). A contrast effect was observed: If the distractor arrived later than the target, it caused a bias toward early responses, relative to the condition without a distractor. The early-arriving distractor had no significant effect. The pattern of results was unaltered when potentially confounding information from individual visual cues was removed. The availability of stereoscopic information reduced the effect. The contrast effect was also observed if target and distractor were abstract geometric objects rather than simulations of real-world vehicles, rendering less likely a simple safety strategy activated by a potentially threatening distractor. Experiment 5 showed that the effect of the late-arriving distractor generalized to a prediction-motion task. The results indicate that task-irrelevant information in the background must be considered in any revision of time-to-contact theory. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
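Time-to-contact research of this kind builds on the classical "tau" analysis: for a directly approaching object, TTC can be estimated from purely optical variables as τ = θ / (dθ/dt), the visual angle divided by its rate of expansion, which under the small-angle approximation equals distance divided by approach speed. A minimal illustration of that identity (the numbers are illustrative, not stimulus parameters from these experiments):

```python
def optical_tau(size, distance, speed):
    """First-order time-to-contact estimate from optical variables.

    For an object of physical size `size` at `distance`, approaching at
    constant `speed`, the small-angle approximation gives:
      theta     ~ size / distance            (visual angle, rad)
      theta_dot ~ size * speed / distance**2 (rate of optical expansion)
    so tau = theta / theta_dot = distance / speed.
    """
    theta = size / distance
    theta_dot = size * speed / distance ** 2
    return theta / theta_dot

# A 1.8 m object, 30 m away, approaching at 10 m/s: tau = 3.0 s
print(optical_tau(size=1.8, distance=30.0, speed=10.0))
```

Note that the object's size and absolute distance cancel out, which is why tau-based accounts hold that observers can judge TTC without recovering either quantity; the distractor effects reported above are one challenge to such purely target-based accounts.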

13.
Skilled readers of Chinese participated in sorting and visual search experiments. The sorting results showed that under conditions of conflicting information about structure and component, subjective judgments of the visual similarity among characters were based on the characters' overall configurations (i.e., structures) rather than on the common components the characters possessed. In visual search, both structure and component contributed to the visual similarity reflected by the search efficiency. The steepest search slopes (thus the most similar target-distractor pairs) were found when the target and the distractor characters had the same structure and shared 1 common component, compared with when they had different structures and/or shared no common components. Results demonstrated that character structure plays a greater role in the visual similarity of Chinese characters than has previously been recognized. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
This study examined whether attention to a local part of a visual display can prevent access to semantic information in form-matching tasks with objects. A first picture containing a line segment (the reference) was followed by 2 lateral objects also containing a line segment (a target and a distractor). Participants matched the line segments according to either their orientation or color. Effects of semantic information were assessed by manipulating the semantic relations among the pictures surrounding the reference, target, and distractor. Semantic information affected performance in the orientation matching task, but not in the color matching task. Results suggest the existence of separate selection mechanisms in vision. Selection of local colors for response purposes can be based on inhibition of the form pathway (eliminating semantic effects on matching). Selection within the form pathway can involve a bias toward global shape (the picture). Once attention is allocated to global shape, associated semantic representations are activated and semantic effects on matching emerge. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Two lexical-decision experiments investigated the effects of semantic priming and stimulus intensity when target location varied and was cued by an abrupt onset. In Experiment 1, the spatial cue was a good predictor of target location, and in Experiment 2 it was not. The results indicate that word recognition processes were postponed until spatial attention was focused on the target and that whether attention further affected word recognition depended on cue validity. The joint effects of cue validity and priming interacted when cue validity was high but were additive when cue validity was low. The joint effects of stimulus intensity and semantic priming also varied according to cue validity (i.e., interactive when high and additive when low). The results are discussed in terms of their implications for visual word recognition, the distinction between exogenous and endogenous spatial attention, and how attention is affected by visual word recognition processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
The magnitude of priming resulting from perception of a briefly presented picture of an object in an earlier trial block, as assessed by naming reaction times (RTs), was independent of whether the primed object was presented at the same or a different size as when originally viewed. RTs and error rates for "same" responses for old–new shape judgments were much increased by a change in object size from initial presentation. The authors conjecture that this dissociation between the effects of size consistency on naming and old–new shape recognition may reflect the differential functioning of 2 independent systems subserving object memory: one for representing object shape and the other for representing its size, position, and orientation (metric attributes). Allowing for response selection, object naming RTs may provide a relatively pure measure of the functioning of the shape system. Both the shape and metric systems may affect the feelings of familiarity that govern old–new episodic shape judgments. A comparison of speeded naming and episodic recognition judgments may provide a behavioral, noninvasive technique for determining the neural loci of these 2 systems. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Three experiments investigated interhemispheric interactions in number comparison using the interhemispheric Stroop-like paradigm (E. Ratinckx, M. Brysbaert, & B. Reynvoet, 2001). In all experiments, a target was presented in 1 visual field simultaneously with a distractor in the other visual field. In Experiment 1, both target and distractor were of the same modality (Arabic digits), whereas in Experiment 2, target and distractor were of different modalities (Arabic digits and word numerals). In Experiment 3, the interhemispheric Stroop-like task of Experiment 1 was combined with intrahemispheric conditions to evaluate the strength of the interhemispheric interactions. Overall, the results point to strong interhemispheric integration during semantic access and response preparation with very weak lateralization of the semantic number system. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
[Correction Notice: An erratum for this article was reported in Vol 17(2) of Journal of Experimental Psychology: Applied (see record 2011-11863-002). The copyright for the article was incorrectly listed. The copyright is in the correction.] Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e., digital aeronautical charts) to examine limits of attention in visual search. Undergraduates at a large southern university searched for a target among 4, 8, or 16 distractors in charts with high, medium, or low global clutter. The target was in a high or low local-clutter region of the chart. In Experiment 1, reaction time increased as global clutter increased, particularly when the target was in a high local-clutter region. However, there was no effect of distractor set size, supporting the notion that global clutter is a better measure of attention against competition in complex visual search tasks. As a control, Experiment 2 demonstrated that increasing the number of distractors leads to a typical set size effect when there is no additional clutter (i.e., no chart). In Experiment 3, the effects of global and local clutter were minimized when the target was highly salient. When the target was nonsalient, more fixations were observed in high global clutter charts, indicating that the number of elements competing with the target for attention was also high. The results suggest design techniques that could improve pilots' search performance in aeronautical charts. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

19.
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be re-presented in the display, where it surrounded either the target (on valid trials) or a distractor (on invalid trials). Perceptual identification of the target, as indexed by A′, was enhanced on valid relative to invalid trials but only when the cue was kept in WM. There was minimal effect of the cue when it was merely attended and not kept in WM. Verbal cues were as effective as visual cues at modulating perceptual identification, and the effects were independent of the effects of target saliency. Matches to the contents of WM influenced perceptual sensitivity even under conditions that minimized competition for selecting the target. WM cues were also effective when targets were less likely to fall in a repeated WM stimulus than in other stimuli in the search display. There were no effects of WM on decisional criteria, in contrast to sensitivity. The findings suggest that reentrant feedback from WM can affect early stages of perceptual processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
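The A′ statistic used above to index perceptual identification is a nonparametric sensitivity measure that, unlike d′, requires no normality assumptions; it ranges from 0.5 (chance) to 1.0 (perfect discrimination). A minimal sketch of the standard formula (Pollack & Norman, 1964; the example rates are hypothetical, not data from the study):

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity index A' from hit and false-alarm rates.

    0.5 = chance performance; 1.0 = perfect discrimination. Uses the
    standard formula for hit_rate >= fa_rate and its mirror-image form
    for below-chance performance.
    """
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5  # chance: no discrimination
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Hypothetical observer: 80% hits, 20% false alarms
print(a_prime(0.8, 0.2))  # -> 0.875
```

Because A′ isolates sensitivity from response bias, a validity effect on A′ with no corresponding shift in decisional criteria (as reported above) supports a genuinely perceptual locus for the working-memory cueing effect.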

20.
Since M. M. Chun and Y. Jiang's (1998) original study, a large body of research based on the contextual cuing paradigm has shown that the visuocognitive system is capable of capturing certain regularities in the environment in an implicit way. The present study investigated whether regularities based on the semantic category membership of the context can be learned implicitly and whether that learning depends on attention. The contextual cuing paradigm was used with lexical displays in which the semantic category of the contextual words either did or did not predict the target location. Experiments 1 and 2 revealed that implicit contextual cuing effects can be extended to semantic category regularities. Experiments 3 and 4 indicated an implicit contextual cuing effect when the predictive context appeared in an attended color but not when the predictive context appeared in an ignored color. However, when the previously ignored context suddenly became attended, it immediately facilitated performance. In contrast, when the previously attended context suddenly became ignored, no benefit was observed. Results suggest that the expression of implicit semantic knowledge depends on attention but that latent learning can nevertheless take place outside the attentional field. (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号