Similar Literature
20 similar articles found
1.
Two experiments using a variation of the clue word analogy task (Goswami, 1986) explored whether children can make orthographic analogies when given multiple clue words, beyond the known effects of purely phonological activation. In Experiment 1, 42 children (mean age 6 years and 8 months) were first taught 3 “clue” words (e.g., fail, mail, jail) and then shown target words sharing orthographic and phonological rimes (e.g., hail), phonological rimes (e.g., veil), orthographic and phonological vowel digraphs (e.g., wait), phonological vowel digraphs (e.g., vein), or unrelated controls (e.g., bard). All word types were advantaged at posttest over unrelated controls. A small additional advantage for orthographic and phonological rimes over phonological rimes was evident in the by-participant analysis. Finally, regression analysis showed a specific relationship between onset-rime phonological awareness and orthographic rime clue word task transfer. Experiment 2 replicated Experiment 1 with 30 children (M age = 7 years, 0 months) and added a distinct group of children taught multiple clue words sharing vowel digraphs (e.g., gait, maim, maid). Results showed advantages for all words over unrelated controls and a small additional advantage for orthographic and phonological vowel digraphs over phonological vowel digraphs in the by-participant analysis. Overall, results suggest that some young children do have the ability to make orthographic analogies when given multiple exemplars but that most improvement in target word reading reflects purely phonological activation. Practical steps for identifying genuine analogy use in a subset of children are thus described. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

2.
[Correction Notice: An erratum for this article was reported in Vol 32(2) of Journal of Experimental Psychology: Learning, Memory, and Cognition (see record 2007-16796-001). The note to Appendix B (Stimuli Used in Experiment 2) on p. 14 contained errors. The fourth sentence, "For example, for participants receiving List A, lock was the target, key was the semantically related object, deer was the target's control, and apple was the related objects control" should read as follows: "For example, for participants receiving List A, logs was the target, key was the semantic onset competitor, and apple was the competitor's control."] Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Reports an error in "Eye Movements to Pictures Reveal Transient Semantic Activation During Spoken Word Recognition" by Eiling Yee and Julie C. Sedivy (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2006[Jan], Vol 32[1], 1-14). The note to Appendix B (Stimuli Used in Experiment 2) on p. 14 contained errors. The fourth sentence, "For example, for participants receiving List A, lock was the target, key was the semantically related object, deer was the target's control, and apple was the related objects control" should read as follows: "For example, for participants receiving List A, logs was the target, key was the semantic onset competitor, and apple was the competitor's control." (The following abstract of the original article appeared in record 2006-01955-001.) Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Two experiments use rhyme priming techniques to explore the decision space for lexical access. The 1st experiment, using intramodal (auditory-auditory) priming, covaried the phonological distance of a spoken rhyme prime (e.g., pomato) from its source word (e.g., tomato) with the presence or absence of close lexical competitors. The results showed strong effects of phonological distance and no significant effects of competitor environment. The 2nd experiment, using ambiguous rhyme primes in a cross-modal (auditory-visual) priming task, showed that phonetically ambiguous primes could fully match their source words, but only in the appropriate lexical environment. The results support a view of lexical access in which the listener's perceptual experience is based on strict requirements for a bottom-up match with the speech input, and in which competitor environment does not directly modulate the on-line goodness-of-fit computation.

5.
The effectiveness of nonword orthographic rime primes as a function of the regularity (as defined by grapheme-phoneme correspondence [GPC] rules) of typical pronunciation was examined in this research. In Experiments 1 and 2, predictions from GPC and orthographic rime unit accounts converged, but in Experiments 3 and 4 they diverged. Experiment 1 showed that when nonword orthographic rimes were used to prime consistent regular words (e.g., mist) and atypically irregular words (e.g., pint), reliable priming was observed for regular words, but priming of atypically irregular words occurred only in the 2nd block of trials, after the orthographic rime prime itself had been primed by the Block 1 presentation of the target word. In subsequent experiments, only the 1st block of trials was examined. Experiment 2 replicated selective priming of consistent regular words observed in Block 1 of Experiment 1. In Experiment 3, nonword orthographic rimes were as effective at priming typically irregular target words (e.g., grind) as they were in priming inconsistent but typically regular target words (e.g., flint)… (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Pronunciation performance under speeded conditions was examined for various kinds of letter strings, including pseudohomophones (e.g., TRAX), their real word counterparts (e.g., TRACKS), and a set of nonword controls (e.g., PRAX). Experiment 1 yielded a pronunciation advantage for the pseudohomophones relative to the controls, which was largest among items having few or no orthographic neighbors. Experiment 2 ruled out an account of the pseudohomophone advantage based on differences between pseudohomophones and controls in initial phonemes. Experiment 3 established the existence of a large frequency effect on pronunciation of the base words themselves. These results suggest that whole word representations in the phonological output lexicon are consulted in the course of assembling a pronunciation and that representations in a phonological output lexicon are insensitive to word frequency. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
The role of semantic, orthographic, and phonological codes in word recognition and text integration in reading was investigated in 4 experiments. Participants read sentences containing words that had multiple semantic codes (e.g., calf), multiple semantic and orthographic codes (e.g., brake-break), or multiple semantic and phonological codes (e.g., tear). Converging evidence from fixation time, naming time, and oral reading indicated that phonological, semantic, and orthographic information about words are sources of early constraint in word processing. Evidence also indicated that phonological codes play an important role in text integration in reading. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Two main theories of visual word recognition have been developed regarding the way orthographic units in printed words map onto phonological units in spoken words. One theory suggests that a string of single letters or letter clusters corresponds to a string of phonemes (Coltheart, 1978; Venezky, 1970), while the other suggests that a string of single letters or letter clusters corresponds to coarser phonological units, for example, onsets and rimes (Treiman & Chafetz, 1987). These theoretical assumptions were critical for the development of coding schemes in prominent computational models of word recognition and reading aloud. In a reading-aloud study, we tested whether the human reading system represents the orthographic/phonological onset of printed words and nonwords as single units or as separate letters/phonemes. Our results, which favored a letter and not an onset-coding scheme, were successfully simulated by the dual-route cascaded (DRC) model (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001). A separate experiment was carried out to further adjudicate between 2 versions of the DRC model. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
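To make the contrast between the two coding assumptions concrete, the sketch below codes a printed string either letter by letter or as an onset plus rime split at the first vowel letter. This is a hypothetical illustration only; the first-vowel heuristic and the function names are assumptions, not the DRC model's actual input coding.

```python
# Hypothetical illustration of the two orthographic coding assumptions discussed
# above; NOT the DRC model's actual coding scheme.

VOWELS = set("aeiou")

def letter_units(word):
    """Fine-grained coding: each letter is its own unit."""
    return list(word)

def onset_rime_units(word):
    """Coarse coding: split at the first vowel letter into onset + rime.
    Example: 'print' -> ['pr', 'int']."""
    for i, ch in enumerate(word):
        if ch in VOWELS:
            return [word[:i], word[i:]] if i > 0 else [word]
    return [word]  # no vowel letter: treat the whole string as one unit

print(letter_units("print"))      # ['p', 'r', 'i', 'n', 't']
print(onset_rime_units("print"))  # ['pr', 'int']
```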

9.
The role of assembled phonology in visual word recognition was investigated using a task in which participants judged whether 2 words (e.g., PILLOW–BEAD) were semantically related. Of primary interest was whether it would be more difficult to respond "no" to "false homophones" (e.g., BEAD) of words (BED) that are semantically related to target words than to orthographic controls (BEND). (BEAD is a false homophone of BED because -EAD can be pronounced /εd/.) In Experiment 1, there was an interference effect in the response time data, but not in the error data. These results were replicated in a 2nd experiment in which a parafoveal preview was provided for the 2nd word of the pair. A 3rd experiment ruled out explanations of the false homophone effect in terms of inconsistency in spelling-to-sound mappings or inadequate spelling knowledge. It is argued that assembled phonological representations activate meaning in visual word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
In a 2-wave study of a cohort of 108 Chinese students (10- to 11-year-olds) learning English as a second language, the authors examined the relative effects of three Time 1 latent constructs (orthographic knowledge, phonological sensitivity, and word identification, i.e., reading and spelling of regular and exception words) on the respective Time 2 performance. The authors posited autoregressive effects, in which Time 1 constructs affected their matching Time 2 performance (e.g., Time 1 orthographic knowledge on Time 2 orthographic knowledge), as well as reciprocal cross-domain effects (e.g., orthographic knowledge on word identification and vice versa). The model converged to a proper solution with reasonably good fit. The results suggest (a) strong stability in the children's word identification and phonological sensitivity; (b) substantial effects of word identification on subsequent orthographic knowledge and phonological sensitivity, particularly the former; and (c) greater variations in individuals' growth of orthographic knowledge. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
To test the effect of the frequency of orthographic "neighbors" on the identification of a printed word, two sets of words were constructed (equated on the number of neighbors, word frequency, and number of letters); in one set, the words had no higher frequency neighbors and in the other set, they had at least one higher frequency neighbor. Identification was slower for the latter set. In Experiment 1, this was indexed by longer response times in a lexical decision task. In Experiment 2, the target words were embedded in sentences, and slower identification was indexed by disruptions in reading: more regressions back to the words with higher frequency neighbors and longer fixations on the text immediately following these words. The latter results indicate that a higher frequency neighbor affects relatively late stages of lexical access, an interpretation consistent with both activation-verification and interactive activation models.
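As a concrete illustration of the neighbor definition used in such studies (words of the same length that differ in exactly one letter position), the following sketch checks whether a word has at least one higher frequency neighbor. The toy lexicon and frequency counts are invented for illustration; they are not the study's stimuli.

```python
# Toy illustration of the "higher frequency neighbor" manipulation described above.
# Neighbors are defined as same-length words differing in exactly one letter position.

def is_neighbor(a, b):
    """True if a and b have the same length and differ in exactly one position."""
    return len(a) == len(b) and a != b and sum(x != y for x, y in zip(a, b)) == 1

def has_higher_frequency_neighbor(word, lexicon):
    """True if any orthographic neighbor of `word` is more frequent than `word` itself."""
    return any(is_neighbor(word, other) and freq > lexicon[word]
               for other, freq in lexicon.items())

# Invented frequency counts, for illustration only.
toy_lexicon = {"blur": 5, "blue": 300, "slur": 8}
print(has_higher_frequency_neighbor("blur", toy_lexicon))  # True (blue is a higher frequency neighbor)
print(has_higher_frequency_neighbor("blue", toy_lexicon))  # False
```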

12.
Four visual-world experiments, in which listeners heard spoken words and saw printed words, compared an optimal-perception account with the theory of phonological underspecification. This theory argues that default phonological features are not specified in the mental lexicon, leading to asymmetric lexical matching: Mismatching input (pin) activates lexical entries with underspecified coronal stops (tin), but lexical entries with specified labial stops (pin) are not activated by mismatching input (tin). The eye-tracking data failed to show such a pattern. Although words that were phonologically similar to the spoken target attracted more looks than did unrelated distractors, this effect was symmetric in Experiment 1 with minimal pairs (tin–pin) and in Experiments 2 and 3 with words with an onset overlap (peacock–teacake). Experiment 4 revealed that /t/-initial words were looked at more frequently if the spoken input mismatched only in terms of place than if it mismatched in place and voice, contrary to the assumption that /t/ is unspecified for place and voice. These results show that speech perception uses signal-driven information to the fullest, as was predicted by an optimal perception account. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

13.
Three experiments in Serbo-Croatian were conducted on the effects of phonological ambiguity and lexical ambiguity on printed word recognition. Subjects decided rapidly if a printed and a spoken word matched or not. Printed words were either phonologically ambiguous (two possible pronunciations) or unambiguous. If phonologically ambiguous, either both pronunciations were real words or only one was, the other being a nonword. Spoken words were necessarily unambiguous. Half the spoken words were auditorily degraded. In addition, the relative onsets of speech and print were varied. Speed of matching print to speech was slowed by phonological ambiguity, and the effect was amplified when the stimulus was also lexically ambiguous. Auditory degradation did not interact with print ambiguity, suggesting the perception of the spoken word was independent of the printed word. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
In reading, do people access word meaning by looking up the mental lexicon orthographically or by first converting spelling to sound and then accessing the lexicon phonologically? In Experiment 1, participants read a pair of words (e.g., experimental pair: lion-bare, control pair: lion-bean) and decided which member of the word pair was related in meaning to a third word (e.g., wolf). Error rates and reaction times were worse on the experimental pairs with homophones as distractors than on the control pairs, indicating that inappropriate lexical entries were accessed by homophones via the phonological route. In Experiments 2 and 3, when a delay was imposed between the word pair and the third word, the phonologically mediated interference effect disappeared at a stimulus onset asynchrony of 300-400 ms, indicating that the wrongly activated lexical entries were later inhibited, apparently via the orthographic route. A revised dual-route model that emphasizes phonological recoding is proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
The number and type of connections involving different levels of orthographic and phonological representations differentiate between several models of spoken and visual word recognition. At the sublexical level of processing, Borowsky, Owen, and Fonos (1999) demonstrated evidence for direct processing connections from grapheme representations to phoneme representations (i.e., a sensitivity effect) over and above any bias effects, but not in the reverse direction. Neural network models of visual word recognition implement an orthography to phonology processing route that involves the same connections for processing sublexical and lexical information, and thus a similar pattern of cross-modal effects for lexical stimuli is expected by models that implement this single type of connection (i.e., orthographic lexical processing should directly affect phonological lexical processing, but not in the reverse direction). Furthermore, several models of spoken word perception predict that there should be no direct connections between orthographic representations and phonological representations, regardless of whether the connections are sublexical or lexical... (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
The influence of addition and deletion neighbors on visual word identification was investigated in four experiments. Experiments 1 and 2 used Spanish stimuli. In Experiment 1, lexical decision responses were slower and less accurate for words and nonwords with higher-frequency deletion neighbors (e.g., jugar in juzgar), relative to control stimuli. Experiment 2 showed a similar interference effect for words and nonwords with higher-frequency addition neighbors (e.g., conejo, which has the addition neighbor consejo), relative to control stimuli. Experiment 3 replicated this addition neighbor interference effect in a lexical decision experiment with English stimuli. Across all three experiments, interference effects were always evident for addition/deletion neighbors with word-outer overlap, usually present for those with word-initial overlap, but never present for those with word-final overlap. Experiment 4 replicated the addition/deletion neighbor inhibitory effects in a Spanish sentence reading task in which the participants’ eye movements were monitored. These findings suggest that conventional orthographic neighborhood metrics should be redefined. In addition to its methodological implications, this conclusion has significant theoretical implications for input coding schemes and the mechanisms underlying word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
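A minimal sketch of the addition/deletion neighbor relations described above, under simple assumptions: a deletion neighbor is formed by removing one letter (jugar from juzgar), and an addition neighbor by inserting one letter (consejo from conejo). The tiny word set and helper names are illustrative only, not the study's materials.

```python
# Toy illustration of deletion and addition neighbors as defined above.

def one_letter_deletions(word):
    """All strings obtained by deleting exactly one letter from `word`."""
    return {word[:i] + word[i + 1:] for i in range(len(word))}

def deletion_neighbors(word, lexicon):
    """Lexicon words obtained by deleting one letter from `word` (e.g., jugar in juzgar)."""
    return one_letter_deletions(word) & lexicon

def addition_neighbors(word, lexicon):
    """Lexicon words obtained by inserting one letter into `word` (e.g., consejo for conejo)."""
    return {w for w in lexicon if word in one_letter_deletions(w)}

lexicon = {"jugar", "juzgar", "conejo", "consejo"}
print(deletion_neighbors("juzgar", lexicon))  # {'jugar'}
print(addition_neighbors("conejo", lexicon))  # {'consejo'}
```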

18.
In 2 experiments, a boundary technique was used with parafoveal previews that were identical to a target (e.g., sleet), a word orthographic neighbor (sweet), or an orthographically matched nonword (speet). In Experiment 1, low-frequency words in orthographic pairs were targets, and high-frequency words were previews. In Experiment 2, the roles were reversed. In Experiment 1, neighbor words provided as much preview benefit as identical words and greater benefit than nonwords, whereas in Experiment 2, neighbor words provided no greater preview benefit than nonwords. These results indicate that the frequency of a preview influences the extraction of letter information without setting up appreciable competition between previews and targets. This is consistent with a model of word recognition in which early stages largely depend on excitation of letter information, and competition between lexical candidates becomes important only in later stages. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

19.
In response time tasks, inhibitory neighborhood effects have been found for word pairs that differ in a transposition of two adjacent letters (e.g., clam/calm). Here, the author describes two eye-tracking experiments conducted to explore transposed-letter (TL) neighborhood effects within the context of normal silent reading. In Experiment 1, sentences contained a target word that either has a TL neighbor (e.g., angel, which has the TL neighbor angle) or does not (e.g., alien). In Experiment 2, the context was manipulated to examine whether semantic constraints attenuate neighborhood effects. Readers took longer to process words that have a TL neighbor than control words but only when either member of the TL pair was likely. Furthermore, this interference effect occurred very late in processing and was not affected by relative word frequency. These interference effects can be explained either by the spreading of activation from the target word to its TL neighbor or by the misidentification of target words for their TL neighbors. Implications for models of orthographic input coding and models of eye-movement control are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
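A minimal sketch of the transposed-letter (TL) neighbor relation just described: a word's TL neighbors are the lexicon entries reachable by swapping one pair of adjacent letters (angel/angle, clam/calm). The small word list is illustrative only.

```python
# Toy illustration of transposed-letter (TL) neighbors as defined above.

def tl_neighbors(word, lexicon):
    """Lexicon words reachable from `word` by transposing one pair of adjacent letters."""
    swaps = {word[:i] + word[i + 1] + word[i] + word[i + 2:]
             for i in range(len(word) - 1)}
    return (swaps - {word}) & lexicon

lexicon = {"angel", "angle", "alien", "clam", "calm"}
print(tl_neighbors("angel", lexicon))  # {'angle'}  -> angel has a TL neighbor
print(tl_neighbors("alien", lexicon))  # set()      -> alien does not
```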

20.
Clustering coefficient—a measure derived from the new science of networks—refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words bat, hat, and can, all of which are neighbors of the word cat; the words bat and hat are also neighbors of each other. In a perceptual identification task, words with a low clustering coefficient (i.e., few neighbors are neighbors of each other) were more accurately identified than words with a high clustering coefficient (i.e., many neighbors are neighbors of each other). In a lexical decision task, words with a low clustering coefficient were responded to more quickly than words with a high clustering coefficient. These findings suggest that the structure of the lexicon (i.e., the similarity relationships among neighbors of the target word measured by clustering coefficient) influences lexical access in spoken word recognition. Simulations of the TRACE and Shortlist models of spoken word recognition failed to account for the present findings. A framework for a new model of spoken word recognition is proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
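A minimal sketch of the clustering coefficient as defined above: the proportion of a target word's neighbor pairs that are themselves neighbors of each other. Neighbors are approximated here by a one-step substitution/addition/deletion rule over plain letter strings; real phonological transcriptions and a full lexicon would be used in practice, and the toy word set is illustrative only.

```python
# Toy illustration of the clustering coefficient described above: the proportion of
# a target word's neighbor pairs that are themselves neighbors of each other.

from itertools import combinations

def neighbors(a, b):
    """One-step similarity: b differs from a by one substitution, addition, or deletion."""
    if a == b:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:
        shorter, longer = sorted((a, b), key=len)
        return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))
    return False

def clustering_coefficient(target, lexicon):
    """Proportion of the target's neighbor pairs that are also neighbors of each other."""
    hood = [w for w in lexicon if neighbors(target, w)]
    pairs = list(combinations(hood, 2))
    if not pairs:
        return 0.0
    return sum(neighbors(u, v) for u, v in pairs) / len(pairs)

lexicon = {"cat", "bat", "hat", "can", "cot"}
# bat, hat, can, and cot are neighbors of cat; of those, only bat-hat are neighbors of each other.
print(round(clustering_coefficient("cat", lexicon), 2))  # 0.17 (1 of 6 neighbor pairs)
```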
