Similar Articles
20 similar articles found.
1.
Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct. The correct Figure 10 is provided. (The following abstract of the original article appeared in record 2008-11850-014.) Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments, novel names for the actions and the shapes varied in frequency, cohort density, and whether the cohorts referred to actions (Experiment 1) or shapes with action-congruent or action-incongruent affordances (Experiments 2 and 3). Experiment 1 demonstrated effects of frequency and cohort competition from both displayed and non-displayed competitors. In Experiment 2, a biasing context induced an increase in anticipatory eye movements to congruent referents and reduced the probability of looks to incongruent cohorts, without the delay predicted by access-selection models. In Experiment 3, context did not reduce competition from non-displayed incompatible neighbors as predicted by restrictive access models. The authors conclude that the results are most consistent with continuous integration models. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Variability in talker identity, one type of indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. Furthermore, neuropsychological evidence suggests that indexical and linguistic information may be represented and processed differently in the 2 cerebral hemispheres, a pattern consistent with findings from the visual domain. For example, in visual word recognition, changes in font affect processing differently depending on which hemisphere initially processes the input. The present study examined whether hemispheric differences exist in spoken language as well. In 4 long-term repetition-priming experiments, the authors examined responses to stimuli that were primed by stimuli that matched or mismatched in talker identity. The results demonstrate that indexical variability can affect participants' perception of spoken words differently in the 2 hemispheres. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Four experiments used the psychological refractory period logic to examine whether integration of multiple sources of phonemic information has a decisional locus. All experiments made use of a dual-task paradigm in which participants made forced-choice color categorization (Task 1) and phoneme categorization (Task 2) decisions at varying stimulus onset asynchronies. In Experiment 1, Task 2 difficulty was manipulated using words containing matching or mismatching coarticulatory cues to the final consonant. The results showed that difficulty and onset asynchrony combined in an underadditive way, suggesting that the phonemic mismatch was resolved prior to a central decisional bottleneck. Similar results were found in Experiment 2 using nonwords. In Experiment 3, the manipulation of task difficulty involved lexical status, which once again revealed an underadditive pattern of response times. Finally, Experiment 4 compared this prebottleneck variable with a decisional variable: response key bias. The latter showed an additive pattern of responses. The experiments show that resolution of phonemic ambiguity can take advantage of cognitive slack time at short asynchronies, indicating that phonemic integration takes place at a relatively early stage of spoken word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
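The locus-of-slack logic behind this underadditivity can be made concrete with the standard central-bottleneck arithmetic. The following minimal Python sketch is not the article's own model or data; the function name and all stage durations are illustrative assumptions. Task 2's central stage must wait for Task 1's central stage, so at short asynchronies extra pre-bottleneck processing is absorbed into the waiting time:

def rt2_ms(soa, pre2, central2=150, post2=100, pre1=100, central1=150):
    # Central-bottleneck model: Task 2's central stage cannot begin
    # until Task 1's central stage has finished (all times in ms).
    t1_central_done = pre1 + central1               # relative to Task 1 onset
    central2_start = max(soa + pre2, t1_central_done)
    return central2_start - soa + central2 + post2  # RT from Task 2 onset

for soa in (50, 800):
    easy = rt2_ms(soa, pre2=80)    # fast pre-bottleneck processing
    hard = rt2_ms(soa, pre2=130)   # e.g., mismatching coarticulatory cues
    print(f"SOA {soa} ms: difficulty effect = {hard - easy} ms")
# Prints 0 ms at SOA 50 (effect absorbed into slack) and 50 ms at SOA 800.

With these toy numbers, a 50-ms increase in Task 2's pre-bottleneck stage adds nothing at a 50-ms asynchrony but the full 50 ms at an 800-ms asynchrony, which is the underadditive pattern the experiments report for phonemic mismatch; a post-bottleneck (decisional) variable such as response key bias would instead add its full cost at every asynchrony.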

5.
Cross-modal semantic priming and phoneme monitoring experiments investigated processing of word-final nonreleased stop consonants (e.g., kit may be pronounced /kIt/ or /kI/), which are common phonological variants in American English. Both voiced /d/ and voiceless /t/ segments were presented in release and no-release versions. A cross-modal semantic priming task (Experiment 1) showed comparable priming for /d/ and /t/ versions. A second set of stimuli ending in /s/ were presented as intact, missing /s/, or with a mismatching final segment and showed significant but reduced priming for the latter two conditions. Experiment 2 showed that phoneme monitoring reaction time for release and no-release words and onset mismatching stimuli (derived pseudowords) increased as acoustic-phonetic similarity to the intended word decreased. The results suggest that spoken word recognition does not require special mechanisms for processing no-release variants. Rather, the results can be accounted for by existing assumptions concerning probabilistic lexical activation based on partial acoustic-phonetic match. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
The authors report 3 dual-task experiments concerning the locus of frequency effects in word recognition. In all experiments, Task 1 entailed a simple perceptual choice and Task 2 involved lexical decision. In Experiment 1, an underadditive effect of word frequency arose for spoken words. Experiment 2 also showed underadditivity for visual lexical decision. It was concluded that word frequency exerts an influence prior to any dual-task bottleneck. A related finding in similar dual-task experiments is Task 2 response postponement at short stimulus onset asynchronies. This was explored in Experiment 3, and it was shown that response postponement was equivalent for both spoken and visual word recognition. These results imply that frequency-sensitive processes operate early and automatically. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Clustering coefficient—a measure derived from the new science of networks—refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words bat, hat, and can, all of which are neighbors of the word cat; the words bat and hat are also neighbors of each other. In a perceptual identification task, words with a low clustering coefficient (i.e., few neighbors are neighbors of each other) were more accurately identified than words with a high clustering coefficient (i.e., many neighbors are neighbors of each other). In a lexical decision task, words with a low clustering coefficient were responded to more quickly than words with a high clustering coefficient. These findings suggest that the structure of the lexicon (i.e., the similarity relationships among neighbors of the target word measured by clustering coefficient) influences lexical access in spoken word recognition. Simulations of the TRACE and Shortlist models of spoken word recognition failed to account for the present findings. A framework for a new model of spoken word recognition is proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
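As a concrete illustration of the measure, here is a minimal Python sketch (not from the article; the function names and the usual one-phoneme substitution/addition/deletion definition of a neighbor are assumptions) that computes a clustering coefficient for the cat example above:

from itertools import combinations

def is_neighbor(w1, w2):
    # Phonological neighbors differ by one phoneme substitution,
    # addition, or deletion; words are given as tuples of phoneme symbols.
    if w1 == w2 or abs(len(w1) - len(w2)) > 1:
        return False
    if len(w1) == len(w2):
        return sum(a != b for a, b in zip(w1, w2)) == 1
    longer, shorter = sorted((w1, w2), key=len, reverse=True)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def clustering_coefficient(target, lexicon):
    # Proportion of the target's neighbor pairs that are themselves neighbors.
    neighbors = [w for w in lexicon if is_neighbor(target, w)]
    pairs = list(combinations(neighbors, 2))
    return sum(is_neighbor(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

lexicon = [tuple("bat"), tuple("hat"), tuple("can")]
print(clustering_coefficient(tuple("cat"), lexicon))  # 0.333...: only bat-hat link

Of the three neighbor pairs around cat, only bat and hat are themselves neighbors, so the coefficient is 1/3; a fully interconnected neighborhood would score 1, matching the abstract's low- versus high-clustering contrast. Letters stand in for phonemes here; real calculations would use phonemic transcriptions.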

9.
Several mechanisms have been proposed to account for how listeners accommodate regular phonological variation in connected speech. Using a corpus analysis and 5 cross-modal priming experiments, the authors investigate phonological variant recognition for the American English word-final flap. The corpus analysis showed that the flap variant occurs relatively frequently compared with the citation form [t] variant and is only probabilistically constrained by prosodic and phonemic context. The experienced distribution of the flap production is reflected in lexical processing: 4 cross-modal priming experiments demonstrated that lexical activation is not influenced by contextual constraints (inappropriate phrase boundary or phonemic contexts). A 2nd finding was a smaller priming effect for the less frequent flap as compared with the more frequent [t] variant. The contrasts between these findings for the flap and other context conditioned variants are discussed in terms of their implications for models of phonological variation recognition and in terms of the role of language experience. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
A spoken language eye-tracking methodology was used to evaluate the effects of sentence context and proficiency on parallel language activation during spoken language comprehension. Nonnative speakers with varying proficiency levels viewed visual displays while listening to French sentences (e.g., Marie va décrire la poule [Marie will describe the chicken]). Displays depicted several objects including the final noun target (chicken) and an interlingual near-homophone (e.g., pool) whose name in English is phonologically similar to the French target (poule). Listeners’ eye movements reflected temporary consideration of the interlingual competitor when hearing the target noun, demonstrating cross-language lexical competition. However, competitor fixations were dramatically reduced when prior sentence information was incompatible with the competitor (e.g., Marie va nourrir… [Marie will feed…]). In contrast, interlingual competition from English did not vary according to participants’ rated proficiency in French, even though proficiency reliably predicted other aspects of processing behavior, suggesting higher proficiency in the active language does not provide a significant independent source of control over interlingual competition. The results provide new insights into the nature of parallel language activation in naturalistic sentential contexts. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
[Correction Notice: An erratum for this article was reported in Vol 32(2) of Journal of Experimental Psychology: Learning, Memory, and Cognition (see record 2007-16796-001). The note to Appendix B (Stimuli Used in Experiment 2) on p. 14 contained errors. The fourth sentence, "For example, for participants receiving List A, lock was the target, key was the semantically related object, deer was the target's control, and apple was the related objects control" should read as follows: "For example, for participants receiving List A, logs was the target, key was the semantic onset competitor, and apple was the competitor's control."] Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
A major issue in the study of word perception concerns the nature (perceptual or nonperceptual) of sentence context effects. The authors compared effects of legal, word replacement, nonword replacement, and transposed contexts on target word performance using the Reicher-Wheeler task to suppress nonperceptual influences of contextual and lexical constraint. Experiment 1 showed superior target word performance for legal (e.g., "it began to flap/flop") over all other contexts and for transposed over word replacement and nonword replacement contexts. Experiment 2 replicated these findings with higher constraint contexts (e.g., "the cellar is dark/dank") and Experiment 3 showed that strong constraint contexts improved performance for congruent (e.g., "born to be wild") but not incongruent (e.g., mild) target words. These findings support the view that the very perception of words can be enhanced when words are presented in legal sentence contexts. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Two gating studies, a forced-choice identification study, and 2 series of cross-modal repetition priming experiments traced the time course of recognition of words with onset embeddings (captain) and short words in contexts that match (cap tucked) or mismatch (cap looking) with longer words. Results suggest that acoustic differences in embedded syllables assist the perceptual system in discriminating short words from the start of longer words. The ambiguity created by embedded words is therefore not as severe as predicted by models of spoken word recognition based on phonemic representations. These additional acoustic cues combine with post-offset information in identifying onset-embedded words in connected speech. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
When the sentence She ran her best time yet in the rice last week is displayed using rapid serial visual presentation, viewers sometimes misread rice as race (M. C. Potter, A. Moryadas, I. Abrams, & A. Noel, 1993). Seven experiments combined misreading and repetition blindness (RB) paradigms to determine whether misreading of a word because of biasing sentence context represents a genuine perceptual effect. In Experiments 1-4, misreading a word either caused or prevented RB for a downstream word, depending on whether orthographic similarity was increased or decreased. Additional experiments examined temporal parameters of misreading RB and tested the hypothesis that RB results from reconstructive memory processes. Results suggest that the effect of prior context occurs during perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Although it is relatively well established that access to orthographic codes in production tasks is possible via an autonomous link between meaning and spelling (e.g., Rapp, Benzing, & Caramazza, 1997), the relative contribution of phonology to orthographic access remains unclear. Two experiments demonstrated persistent repetition priming in spoken and written single-word responses, respectively. Two further experiments showed priming from spoken to written responses and vice versa, which is interpreted as reflecting a role of phonology in constraining orthographic access. A final experiment showed priming from spoken onto written responses even when participants engaged in articulatory suppression during writing. Overall, the results support the view that access to orthography codes is accomplished via both the autonomous link between meaning and spelling and an indirect route via phonology. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

16.
Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment, Dutch-English bilinguals performing a 2nd language (L2) lexical decision task were faster to recognize identical and nonidentical cognate words (e.g., banaan-banana) presented in isolation than control words. A second experiment replicated this effect when the same set of cognates was presented as the final words of low-constraint sentences. In a third experiment that used eyetracking, the authors showed that early target reading time measures also yield cognate facilitation but only for identical cognates. These results suggest that a sentence context may influence, but does not nullify, cross-lingual lexical interactions during early visual word recognition by bilinguals. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Two experiments examined the nature of the phonological representations used during visual word recognition. We tested whether a minimality constraint (R. Frost, 1998) limits the complexity of early representations to a simple string of phonemes. Alternatively, readers might activate elaborated representations that include prosodic syllable information before lexical access. In a modified lexical decision task (Experiment 1), words were preceded by parafoveal previews that were congruent with a target's initial syllable as well as previews that contained 1 letter more or less than the initial syllable. Lexical decision times were faster in the syllable congruent conditions than in the incongruent conditions. In Experiment 2, we recorded brain electrical potentials (electroencephalograms) during single word reading in a masked priming paradigm. The event-related potential waveform elicited in the syllable congruent condition was more positive 250-350 ms posttarget compared with the waveform elicited in the syllable incongruent condition. In combination, these experiments demonstrate that readers process prosodic syllable information early in visual word recognition in English. They offer further evidence that skilled readers routinely activate elaborated, speechlike phonological representations during silent reading. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Most models predict that priming a word should retard recognition of another sharing its initial sounds. Available short lag priming data do not clearly support the prediction. The authors report 7 continuous lexical-decision experiments with 288 participants. With lags of 1–5 min between prime and probe, response time increased for a monosyllabic word preceded by a word sharing its onset and vowel (but not one sharing its rime) and for a polysyllabic word preceded by another sharing its first syllable. The effect was limited to words primed by words, suggesting that identifying the prime strengthens its lexical attractor, making identification of a lexical neighbor more difficult. With lags of only a few trials, facilitatory effects of phonological similarity or familiarity bias effects were also seen; this may explain why clear evidence for inhibitory priming has been lacking hitherto. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
This study examined the influence of letter-name instruction on beginning word recognition. Thirty-three preschool children from low-socioeconomic-status families participated in 16 weeks of letter-name or comprehension-focused instruction. After instruction, children's ability to learn 3 types of word spellings was examined: words phonetically spelled with letters children had been taught (e.g., BL for ball), words phonetically spelled with letters children had not been taught, and words with visually distinct letter spellings that were nonphonetic. Children who received letter-name instruction learned words phonetically spelled with letters included in instruction significantly better than other words. Children receiving comprehension instruction performed significantly better on visually distinct word spellings. Results demonstrate the beneficial effects of alphabet-letter instruction on beginning phonetic word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
The present study investigated the mechanisms underlying perceptual compensation for assimilation in novel words. During training, participants learned canonical versions of novel spoken words (e.g., decibot) presented in isolation. Following exposure to a second set of novel words the next day, participants carried out a phoneme monitoring task. Here, the novel words were presented with final alternations (e.g., decibop) in carrier sentences that either licensed assimilation (viable context: Our decibop behaved badly) or did not (unviable context: Our decibop does very well). Listeners had to monitor for the underlying form of the assimilated consonant (e.g., /t/ in decibop). Results showed more responses corresponding to the underlying form in viable than in unviable contexts. This viability effect was equivalent for novel words learned on the same day and on the previous day but was absent for unexposed control items. The processing difference between exposed and control novel words supports the idea that compensation for assimilation interacts with newly acquired phonological information and suggests that contextual compensation for assimilation is enhanced by lexical knowledge. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
