Similar documents
Found 20 similar documents (search time: 171 ms)
1.
A divided attention paradigm was used to investigate whether graphemes and phonemes can mutually activate or inhibit each other during bimodal processing. In 3 experiments, Dutch Ss reacted to visual and auditory targets in single-channel or bimodal stimuli. In some bimodal conditions, the visual and auditory targets were nominally identical or redundant (e.g., visual A and auditory /a/); in others they were not (e.g., visual U and auditory /a/). Temporal aspects of cross-modal activation were examined by varying the stimulus onset asynchrony of visual and auditory stimuli. Cross-modal facilitation, but not inhibition, occurred rapidly and automatically between phoneme and grapheme representations. Implications for current models of bimodal processing and word recognition are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Previous studies have shown that the right hemisphere processes the visual details of objects and the emotionality of information. These two roles of the right hemisphere have not been examined concurrently. In the present study, the authors examined whether right hemisphere processing would lead to particularly good memory for the visual details of emotional stimuli. Participants viewed positive, negative, and neutral objects, displayed to the left or right of a fixation cross. Later, participants performed a recognition task in which they evaluated whether items were "same" (same visual details), "similar" (same verbal label, different visual details), or "new" (unrelated) in comparison with the studied objects. Participants remembered the visual details of negative items well, and this advantage in memory specificity was particularly pronounced when the items had been presented directly to the right hemisphere (i.e., to the left of the fixation cross). These results suggest that there is an episodic memory benefit conveyed when negative items are presented directly to the right hemisphere, likely because of the specialization of the right hemisphere for processing both visual detail and negatively valenced emotional information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's “woofing”) and spoken words (e.g., /dɒg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
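The abstract above reports sensitivity and response criterion estimated with signal detection theory. A minimal sketch of the standard equal-variance computations (the function name and the log-linear correction are our own choices, not details from the study):

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and response criterion (c) from raw counts,
    under the equal-variance Gaussian model, with a log-linear
    correction so hit/false-alarm rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion
```

With 45 hits, 5 misses, 10 false alarms, and 40 correct rejections this yields a d' of about 2 with a slightly liberal criterion; equal hit and false-alarm rates yield d' = 0.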

4.
Under two conditions, 32 English-speaking and 32 Chinese-speaking adults solved single-digit multiplication problems. In one condition, problems were presented as visual digits (e.g., 8×9). In the other condition, problems were presented as auditory number words in the participant's first language (e.g., /eit/ /taimz/ /nain/). Chinese-speaking adults made proportionately more operand-intrusion errors (e.g., 4×8=24) than English-speaking adults. Both groups made more operand-intrusion errors with auditory than with visual presentation. These findings are similar to those found when participants solve problems presented as visual number words (e.g., eight×nine), suggesting that in both cases the activation of phonological codes interferes with processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
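An operand-intrusion error such as 4×8=24 is a wrong response that is nonetheless a correct product of a problem sharing an operand with the presented one (24 = 4×6). A hypothetical checker illustrating that definition (the name and heuristic are our own, not the authors' scoring scheme):

```python
def is_operand_intrusion(a, b, answer):
    """Heuristic check for an operand-intrusion error: the response is
    wrong, yet it is the correct product of some single-digit problem
    that shares an operand with the presented one (e.g., 4 x 8 = 24,
    since 24 = 4 x 6)."""
    if answer == a * b:
        return False  # correct answer, so not an error at all
    # Does the response match a times-table entry for either operand?
    return any(answer in (a * n, n * b) for n in range(2, 10))
```

For example, `is_operand_intrusion(4, 8, 24)` returns True, while an unrelated wrong answer such as 37 does not.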

5.
Effects of depth of encoding on form-specific memory were examined. After viewing words (e.g., "bear") presented centrally during initial encoding, participants completed word stems (e.g., "BEA") presented laterally and pattern masked during subsequent test. When the encoding task was perceptual, letter-case specific memory was not observed, unlike in previous experiments without pattern masking. However, when the encoding task required both perceptual and conceptual processing, letter-case specific memory was observed in direct right-hemisphere, but not in direct left-hemisphere, test presentations, like in previous studies without pattern masking. Results were not influenced by whether stems were completed to form the first words that came to mind or words explicitly retrieved from encoding. Depth of encoding may influence form-specific memory through interactive processing of visual and postvisual information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
The visual system continually selects some information for processing while bypassing the processing of other information, and as a consequence, participants often fail to notice large changes to visual stimuli. In the present studies, the authors investigated whether knowledge about the probability of particular changes occurring over time increased the likelihood that changes that were likely to occur in the real world (probable changes) would be detected. The results of two experiments showed that participants were more likely to detect probable changes. This occurred whether or not they were processing the scene in a meaningful manner or actively searching the scene for changes. Furthermore, participants were unable to accurately predict change detection performance for probable and improbable changes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Affective priming studies have shown that participants are faster to pronounce affectively polarized target words that are preceded by affectively congruent prime words than affectively polarized target words that are preceded by affectively incongruent prime words. We examined whether affective priming of naming responses depends on the valence proportion (i.e., the proportion of stimuli that are affectively polarized). In one group of participants, experimental trials were embedded in a context of filler trials that consisted of affectively polarized stimulus materials (i.e., high valence proportion condition). In a second group, the same set of experimental trials was embedded in a context of filler trials consisting of neutral stimuli (i.e., low valence proportion condition). Results showed that affective priming of naming responses was significantly stronger in the high valence proportion condition than in the low valence proportion condition. We conclude that (a) subtle aspects of the procedure can influence affective priming of naming responses, (b) finding affective priming of naming responses does not allow for the conclusion that affective stimulus processing is unconditional, and (c) affective stimulus processing depends on selective attention for affective stimulus information. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

8.
Repetition priming is often thought to reflect the facilitation of 1 or more processes engaged during initial and subsequent presentations of a stimulus. Priming can also reflect the formation of direct, stimulus–response (S-R) bindings, retrieval of which bypasses many of the processes engaged during the initial presentation. Using long-lag repetition priming of semantic classification of visual stimuli, the authors used task switches between study and test phases to reveal several signatures of S-R learning in Experiments 1 through 5. Indeed, the authors found surprisingly little, if any, evidence of priming that could not be attributed to S-R learning, once they considered the possibility that stimuli are simultaneously bound to multiple, different response codes. Experiments 6 and 7 provided more direct evidence for independent contributions from at least 3 levels of response representation: the action (e.g., specific finger used), the decision (e.g., yes–no), and the task-specific classification (e.g., bigger–smaller). Although S-R learning has been discussed previously in many contexts, the present results go beyond existing theories of S-R learning. Moreover, its dominant role brings into question many interpretations of priming during speeded classification tasks in terms of perceptual–conceptual processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Previous developmental studies have indicated that boys tend to perform better than girls on tasks associated with the right hemisphere (e.g., spatial tasks), whereas girls perform better on tasks associated with the left hemisphere (e.g., verbal tasks). Extending this body of literature to what is known about hemispheric specialization of visuospatial processing, we predicted that boys would be more global than girls in their perception of visual hierarchical stimuli. Forty girls and 39 boys between the ages of 4 and 12 years were administered a perceptual judgment task previously used by Kimchi and Palmer (see record 1983-02534-001). Boys were significantly more global in their perceptual judgments than girls at all ages. Younger children of both sexes were less global than older children. Results were consistent with developmental models that suggest an early left-hemisphere advantage for girls and a right-hemisphere advantage for boys. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
A widespread theoretical assumption is that many processes involved in text comprehension are automatic, with automaticity typically defined in terms of properties (e.g., speed, effort). In contrast, the authors advocate for conceptualization of automaticity in terms of underlying cognitive mechanisms and evaluate one prominent account, the memory-based processing account, which states that one mechanism underlying automatization involves a shift from algorithm-based interpretation of stimuli to retrieval of prior interpretations of those stimuli. During practice, participants repeatedly read short stories containing novel conceptual combinations that were disambiguated with either their dominant or subordinate meaning. During transfer, the combinations were embedded in new sentences that either preserved or changed the disambiguated meaning. The primary dependent variable was reading time in the disambiguating region of target sentences. Supporting the memory-based processing account, speed-ups with practice were larger for repeated versus unrepeated items of the same type, reading times for subordinate versus dominant meanings of the combinations converged on later trials, and practiced meanings were retrieved when items appeared in a transfer context. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
The role of attention in speeded Garner classification of concurrently presented auditory and visual signals was examined in 4 experiments. Within-trial interference (i.e., congruence effects) occurred regardless of the attentional demands of the task. Between-trials interference (i.e., Garner interference) occurred only under conditions of divided attention when making judgments about auditory signals. Of importance, the data show congruence effects in the absence of Garner interference. Such a pattern has been rarely reported in studies of the classification of purely visual stimuli and contradicts theoretical accounts asserting that the effects share a common locus. The data question the notion that Garner classification reveals fundamental insights about the nature of the perceptual processing of bimodal stimuli. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Previous work has consistently reported a facilitatory influence of positive emotion in face recognition (e.g., D’Argembeau, Van der Linden, Comblain, & Etienne, 2003). However, these reports asked participants to make recognition judgments in response to faces, and it is unknown whether emotional valence may influence other stages of processing, such as at the level of semantics. Furthermore, other evidence suggests that negative rather than positive emotion facilitates higher level judgments when processing nonfacial stimuli (e.g., Mickley & Kensinger, 2008), and it is possible that negative emotion also influences later stages of face processing. The present study addressed this issue, examining the influence of emotional valence while participants made semantic judgments in response to a set of famous faces. Eye movements were monitored while participants performed this task, and analyses revealed a reduction in information extraction for the faces of liked and disliked celebrities compared with those of emotionally neutral celebrities. Thus, in contrast to work using familiarity judgments, both positive and negative emotion facilitated processing in this semantic-based task. This pattern of findings is discussed in relation to current models of face processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
In object substitution masking (OSM) a sparse, temporally trailing 4-dot mask impairs target identification, even though it has different contours from, and does not spatially overlap with the target. Here, we demonstrate a previously unknown characteristic of OSM: Observers show reduced masking at prolonged (e.g., 640 ms) relative to intermediate mask durations (e.g., 240 ms). We propose that with prolonged exposure, the mask's visual representation is consolidated, which allows processing of the lingering target icon to be reinitiated, thereby improving performance. Our findings suggest that when the visual system is confronted with 2 temporally contiguous stimuli, although one may initially gain access to consciousness above the other, the “losing” stimulus is not irreversibly lost to awareness. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

14.
The iambic–trochaic law has been proposed to account for the grouping of auditory stimuli: Sequences of sounds that differ only in duration are grouped as iambs (i.e., the most prominent element marks the end of a sequence of sounds), and sequences that differ only in pitch or intensity are grouped as trochees (i.e., the most prominent element marks the beginning of a sequence). In 3 experiments, comprising a familiarization and a test phase, we investigated whether a similar grouping principle is also present in the visual modality. During familiarization, sequences of visual stimuli were repeatedly presented to participants, who were asked to memorize their order of presentation. In the test phase, participants were better at remembering fragments of the familiarization sequences that were consistent with the iambic–trochaic law. Thus, they were better at remembering fragments that had the element with longer duration in final position (iambs) and fragments that had the element with either higher temporal frequency or higher intensity in initial position (trochees), as compared with fragments that were inconsistent with the iambic–trochaic law or that never occurred during familiarization. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
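The iambic–trochaic grouping rule described above can be sketched as a toy function (an illustrative simplification; the name and the prominence-value representation are our own assumptions):

```python
def group_prominence(values, varying):
    """Toy illustration of the iambic-trochaic law: pair up a sequence
    of prominence values so that, when elements differ in duration, the
    prominent element ends each pair (iamb), while for pitch or
    intensity differences it begins each pair (trochee)."""
    pairs = list(zip(values[0::2], values[1::2]))
    if varying == "duration":                 # iambic grouping
        return [tuple(sorted(p)) for p in pairs]
    if varying in ("pitch", "intensity"):     # trochaic grouping
        return [tuple(sorted(p, reverse=True)) for p in pairs]
    raise ValueError("varying must be 'duration', 'pitch', or 'intensity'")
```

For a short–long alternation, `group_prominence([1, 3, 1, 3], "duration")` places the prominent element last in each pair, whereas the same sequence under `"pitch"` places it first.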

15.
Negative priming (NP) refers to the finding that people's responses to probe targets previously presented as prime distractors are usually slower than to unrepeated stimuli. Intriguingly, the effect sizes of tactile NP were much larger than the effect sizes for visual NP. We analyzed whether the large tactile NP effect is just a side effect of the higher difficulty when processing tactile compared to visual stimuli. Thus, we analyzed tactile NP in a sample of blind participants and in a control sample of sighted participants. Although the blind participants handled the tactile stimuli with ease, we found no evidence that the size of the tactile NP effect diminished. In two control experiments with sighted participants, we varied the processing difficulty in the visual and tactile modality and found that both modality and processing difficulty had an effect on the size of NP. Taken together, our data show that the difficulty associated with processing tactile stimuli is only partially the reason for the unusually large tactile NP effect. These results suggest that non-spatial tactile distractors are processed and selected quite differently from visual distractors. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

16.
Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Functional activation (measured with fMRI) in occipital cortex was more extensive when participants viewed pictures strongly related to primary motive states (i.e., victims of violent death, viewer-directed threat, and erotica). This functional activity was greater than that observed for less intense emotional (i.e., happy families or angry faces) or neutral images (i.e., household objects, neutral faces). Both the extent and strength of functional activity were related to the judged affective arousal of the different picture contents, and the same pattern of functional activation was present whether pictures were presented in color or in grayscale. It is suggested that more extensive visual system activation reflects "motivated attention," in which appetitive or defensive motivational engagement directs attention and facilitates perceptual processing of survival-relevant stimuli. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
When observers decide how to classify stimuli, they often employ 1 of 2 types of information: identity along one particular dimension or overall similarity. The present 6 experiments, with 439 undergraduates, examined interrelations among the factors which determine the use of these types of information. Ss' classifications of certain types of materials (e.g., size and brightness, length and density) revealed strong individual differences, were related to the S's response tempo and selective processing ability, and were influenced by task demands. Classifications of other materials (e.g., saturation and brightness) did not reveal individual differences, were not affected by response tempo and selective processing ability, and were unaffected by changes in task demands. The former, but not the latter, types of materials have also been found to be influenced by developmental differences. Findings are consistent with the idea that differences in response tempo and selective processing ability underlie observer differences (both individual and developmental) and that certain types of stimuli that are not susceptible to such influences set boundary conditions for observer differences. The results are discussed within an integral-to-separable model of processing. (33 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Two experiments used Müller-Lyer stimuli to test the predictions of the planning-control model (S. Glover, 2002) for aiming movements. In Experiment 1, participants aimed at stimuli that either remained the same or changed upon movement initiation. Experiment 2 was identical except that the duration of visual feedback for online control was manipulated. The authors found that the figures visible during movement planning and online control had additive effects on endpoint bias, even when participants had ample time to use visual feedback to modify their movements (Experiment 2). These findings are problematic not only for the planning-control model but also for A. D. Milner and M. A. Goodale's (1995) two visual system explanation of illusory bias. Although our results are consistent with the idea that a single representation is used for perception, movement planning, and online control (e.g., V. H. Franz, 2001), other work from our laboratory and elsewhere suggests that the manner in which space is coded depends on constraints associated with the specific task, such as the visual cues available to the performer. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

20.
Within-subjects procedures were used to assess the influence of stimulus comparison on perceptual learning in humans. In Experiment 1, participants received intermixed (A, A′, A, A′,…) or blocked (B, B,…, B′, B′,…) exposure to pairs of similar female faces. In a subsequent same/different discrimination task, participants were more accurate when the test involved A and A′ than when it involved B and B′ (or novel faces: C and C′). This perceptual learning effect was reduced by placing a visual distractor (*: either another face or a checkerboard) between successive presentations of the faces during the exposure stage (e.g., A – * – A′). The attenuation of the intermixed versus blocked difference was particularly marked when faces were used as the distractor. In Experiment 2, this reduction in perceptual learning was more marked when * was positioned between the pairs of intermixed faces (i.e., A – * – A′) than when it preceded and succeeded those faces (i.e., * – A – A′ – *). These results provide the first direct evidence that the opportunity to compare stimuli plays a causal role in supporting perceptual learning. They also support the specific view that perceptual learning reflects an interaction between a short-term habituation process, that ordinarily biases processing away from the frequently presented common elements and toward their less frequently presented unique elements, and a long-term representational process that reflects this bias. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
