Similar Articles
20 similar articles found (search time: 15 ms)
1.
Three experiments in Serbo-Croatian were conducted on the effects of phonological ambiguity and lexical ambiguity on printed word recognition. Subjects decided rapidly if a printed and a spoken word matched or not. Printed words were either phonologically ambiguous (two possible pronunciations) or unambiguous. If phonologically ambiguous, either both pronunciations were real words or only one was, the other being a nonword. Spoken words were necessarily unambiguous. Half the spoken words were auditorily degraded. In addition, the relative onsets of speech and print were varied. Speed of matching print to speech was slowed by phonological ambiguity, and the effect was amplified when the stimulus was also lexically ambiguous. Auditory degradation did not interact with print ambiguity, suggesting the perception of the spoken word was independent of the printed word. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Two experiments use rhyme priming techniques to explore the decision space for lexical access. The 1st experiment, using intramodal (auditory-auditory) priming, covaried the phonological distance of a spoken rhyme prime (e.g., pomato) from its source word (e.g., tomato) with the presence or absence of close lexical competitors. The results showed strong effects of phonological distance and no significant effects of competitor environment. The 2nd experiment, using ambiguous rhyme primes in a cross-modal (auditory-visual) priming task, showed that phonetically ambiguous primes could fully match their source words, but only in the appropriate lexical environment. The results support a view of lexical access in which the listener's perceptual experience is based on strict requirements for a bottom-up match with the speech input, and in which competitor environment does not directly modulate the on-line goodness-of-fit computation.

3.
What form is the lexical phonology that gives rise to phonological effects in visual lexical decision? The authors explored the hypothesis that beyond phonological contrasts the physical phonetic details of words are included. Three experiments using lexical decision and 1 using naming compared processing times for printed words (e.g., plead and pleat) that differ, when spoken, in vowel length and overall duration. Latencies were longer for long-vowel words than for short-vowel words in lexical decision but not in naming. Further, lexical decision on long-vowel words benefited more from identity priming than lexical decision on short-vowel words, suggesting that representations of long-vowel words achieve activation thresholds more slowly. The discussion focused on phonetically informed phonologies, particularly gestural phonology and its potential for understanding reading acquisition and performance. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Lexical decision and naming were examined with words and pseudowords in literary Arabic and with transliterations of words in a Palestinian dialect that has no written form. Although the transliterations were visually unfamiliar, they were not easily rejected in lexical decision, and they were more slowly accepted in phonologically based lexical decision. Naming transliterations of spoken words was slower than naming of literary words and pseudowords. Apparently, phonological computation is mandatory for both lexical decision and naming. A large frequency effect in both lexical decision and naming suggests that addressed phonology is an option for familiar orthographic patterns. The frequency effect on processing transliterations indicated that lexical phonology is involved with prelexical phonological computation even if addressed phonology is not possible. These data support a combination of a cascade-type process, in which partial products of the grapheme-to-phoneme translation activate phonological units in the lexicon, and an interactive model, in which the activated lexical units feed back, shaping the prelexical phonological computation process. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
In reading, do people access word meaning by looking up the mental lexicon orthographically or by first converting spelling to sound and then accessing the lexicon phonologically? In Experiment 1, participants read a pair of words (e.g., experimental pair: lion-bare, control pair: lion-bean) and decided which member of the word pair was related in meaning to a third word (e.g., wolf). Error rates and reaction times were worse on the experimental pairs with homophones as distractors than on the control pairs, indicating that inappropriate lexical entries were accessed by homophones via the phonological route. In Experiments 2 and 3, when a delay was imposed between the word pair and the third word, the phonologically mediated interference effect disappeared at a stimulus onset asynchrony of 300-400 ms, indicating that the wrongly activated lexical entries were later inhibited, apparently via the orthographic route. A revised dual-route model that emphasizes phonological recoding is proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Clustering coefficient—a measure derived from the new science of networks—refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words bat, hat, and can, all of which are neighbors of the word cat; the words bat and hat are also neighbors of each other. In a perceptual identification task, words with a low clustering coefficient (i.e., few neighbors are neighbors of each other) were more accurately identified than words with a high clustering coefficient (i.e., many neighbors are neighbors of each other). In a lexical decision task, words with a low clustering coefficient were responded to more quickly than words with a high clustering coefficient. These findings suggest that the structure of the lexicon (i.e., the similarity relationships among neighbors of the target word measured by clustering coefficient) influences lexical access in spoken word recognition. Simulations of the TRACE and Shortlist models of spoken word recognition failed to account for the present findings. A framework for a new model of spoken word recognition is proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
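The clustering coefficient described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' simulation code: the tiny lexicon is hypothetical, and edit distance 1 over letter strings stands in for one-phoneme neighbor status.

```python
from itertools import combinations

def is_neighbor(a, b):
    """Edit distance 1: one substitution, addition, or deletion."""
    if a == b:
        return False
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = sorted((a, b), key=len)
    # deleting one character from the longer string must yield the shorter
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def clustering_coefficient(target, lexicon):
    """Proportion of the target's neighbor pairs that are themselves neighbors."""
    nbrs = [w for w in lexicon if is_neighbor(target, w)]
    pairs = list(combinations(nbrs, 2))
    if not pairs:
        return 0.0
    return sum(is_neighbor(x, y) for x, y in pairs) / len(pairs)

lexicon = {"cat", "bat", "hat", "can"}
# Of cat's neighbor pairs, only bat-hat are neighbors of each other -> 1/3
print(clustering_coefficient("cat", lexicon))
```

With the abstract's own example, cat has neighbors bat, hat, and can; of the three neighbor pairs, only bat-hat are linked, so the coefficient is 1/3.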

7.
The relative time course of semantic and phonological activation was investigated in the context of whether phonology mediates access to lexical representations in reading Chinese. Compound words (Experiment 1) and single-character words (Experiments 2 and 3) were preceded by semantic and phonological primes. Strong semantic priming effects were found at both short (57 ms) and long (200 ms) stimulus onset asynchrony (SOA), but phonological effects were either absent in lexical decision (Experiment 1), present only at the longer SOA in character decision (Experiment 2), or as strong as semantic effects in naming (Experiment 3). Experiment 4 revealed facilitatory or inhibitory effects, depending on SOA, in phonological judgments to character pairs that were semantically but not phonologically related. It was concluded that, in reading Chinese, semantic information in the lexicon is activated at least as early and just as strongly as phonological information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Tested the assumption that word-frequency effects on recognition result from differential ease of access to lexical entries for high- and low-frequency words. Previous researchers (McCann & Besner, 1987) found that pseudohomophones (e.g., TRAX) were named faster and more accurately than controls (e.g., PRAX), but pseudohomophone performance was not sensitive to base word frequency. In Exp 1 of the present series, performance on the same set of pseudohomophones and controls was assessed in the context of the lexical decision task (does this letter string spell a word?). Pseudohomophone performance was impaired relative to controls, which is commonly taken as evidence of contact with entries in a phonological lexicon. As in the naming task, however, pseudohomophone performance was insensitive to base word frequency. In Exp 2, pseudohomophone performance was examined in the context of a phonological lexical decision task (does this letter string sound like an English word?). Pseudohomophone performance was sensitive to base word frequency in phonological lexical decision. Word-frequency effects in binary decision tasks such as lexical decision and phonological lexical decision are attributed to a familiarity discrimination process that contributes bias to the decision stage. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
The number and type of connections involving different levels of orthographic and phonological representations differentiate between several models of spoken and visual word recognition. At the sublexical level of processing, Borowsky, Owen, and Fonos (1999) demonstrated evidence for direct processing connections from grapheme representations to phoneme representations (i.e., a sensitivity effect) over and above any bias effects, but not in the reverse direction. Neural network models of visual word recognition implement an orthography to phonology processing route that involves the same connections for processing sublexical and lexical information, and thus a similar pattern of cross-modal effects for lexical stimuli is expected by models that implement this single type of connection (i.e., orthographic lexical processing should directly affect phonological lexical processing, but not in the reverse direction). Furthermore, several models of spoken word perception predict that there should be no direct connections between orthographic representations and phonological representations, regardless of whether the connections are sublexical or lexical... (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
The word length effect, the finding that lists of short words are better recalled than lists of long words, has been termed one of the benchmark findings that any theory of immediate memory must account for. Indeed, the effect led directly to the development of working memory and the phonological loop, and it is viewed as the best remaining evidence for time-based decay. However, previous studies investigating this effect have confounded length with orthographic neighborhood size. In the present study, Experiments 1A and 1B revealed typical effects of length when short and long words were equated on all relevant dimensions previously identified in the literature except for neighborhood size. In Experiment 2, consonant–vowel–consonant (CVC) words with a large orthographic neighborhood were better recalled than were CVC words with a small orthographic neighborhood. In Experiments 3 and 4, using two different sets of stimuli, we showed that when short (1-syllable) and long (3-syllable) items were equated for neighborhood size, the word length effect disappeared. Experiment 5 replicated this with spoken recall. We suggest that the word length effect may be better explained by the differences in linguistic and lexical properties of short and long words rather than by length per se. These results add to the growing literature showing problems for theories of memory that include decay offset by rehearsal as a central feature. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
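The orthographic neighborhood size that this study identifies as a confound is conventionally counted as Coltheart's N: the number of same-length words in the lexicon that differ from the target by exactly one letter. A minimal sketch, using a small hypothetical lexicon for illustration:

```python
def orthographic_neighbors(word, lexicon):
    """Coltheart's N candidates: same-length words differing by exactly one letter."""
    return [w for w in lexicon
            if len(w) == len(word) and w != word
            and sum(a != b for a, b in zip(word, w)) == 1]

# Toy lexicon (illustrative only); real norms use a full word-frequency corpus.
lexicon = {"cat", "cab", "cot", "bat", "rat", "cap", "cast", "coat"}
print(sorted(orthographic_neighbors("cat", lexicon)))
```

For "cat" this toy lexicon yields five neighbors (bat, cab, cap, cot, rat), so N = 5; short CVC words typically have much denser neighborhoods than 3-syllable words, which is exactly the confound the experiments control for.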

11.
12.
Previous research on the effects of age of acquisition on lexical processing has relied on adult estimates of the age at which children learn words. The authors report 2 experiments in which effects of age of acquisition on lexical retrieval are demonstrated using real age-of-acquisition norms. In Experiment 1, real age of acquisition emerged as a powerful predictor of adult object-naming speed. There were also significant effects of visual complexity, word frequency, and name agreement. Similar results were obtained in reanalyses of data from 2 other studies of object naming. In Experiment 2, real age of acquisition affected immediate but not delayed object-naming speed. The authors conclude that age-of-acquisition effects are real and suggest that age of acquisition influences the speed with which spoken word forms can be retrieved from the phonological lexicon.

13.
Models of speech perception differ in the role they attribute to contextual information in the processing of assimilated speech. This study concerned perceptual processing of regressive voice assimilation in French. This phonological variation is asymmetric in that assimilation is partial for voiced stops and nearly complete for voiceless stops. Two auditory-visual cross-modal form priming experiments were used to examine perceptual compensation for assimilation in French words with voiceless versus voiced stop offsets. The results show that, for the former segments, assimilating context enhances underlying form recovery, whereas it does not for the latter. These results suggest that two sources of information (contextual information and bottom-up information from the assimilated forms themselves) are complementary and both come into play during the processing of fully or partially assimilated word forms. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
The relationship between semantic–syntactic and phonological levels in speaking was investigated using a picture naming procedure with simultaneously presented visual or auditory distractor words. Previous results with auditory distractors have been used to support the independent stage model (e.g., H. Schriefers, A. S. Meyer, & W. J. M. Levelt, 1990), whereas results with visual distractors have been used to support an interactive view (e.g., P. A. Starreveld & W. La Heij, 1996). Experiment 1 demonstrated that with auditory distractors, semantic effects preceded phonological effects, whereas the reverse pattern held for visual distractors. Experiment 2 indicated that the results for visual distractors followed the auditory pattern when distractor presentation time was limited. Experiment 3 demonstrated an interaction between phonological and semantic relatedness of distractors for auditory presentation, supporting an interactive account of lexical access in speaking. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
We studied the processing of two-word strings in French made up of a determiner and a noun containing a schwa (mute e). Depending on the noun, schwa deletion is present, optional, or absent. In a production study, we show that schwa deletion and the category of the noun have a large impact on the duration of the strings. We take this into account in two perception studies, using word repetition and lexical decision, which show that words in which the schwa has been deleted usually take longer to recognize than words that retain the schwa, although this also depends on the category of the word. We explain these results by examining the influence of orthography. Based on the model proposed by Grainger and Ferrand (1996), which integrates the written dimension, we suggest that two sources of information, phonological and orthographic, interact during spoken word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
The present study investigated the role of emotional tone of voice in the perception of spoken words. Listeners were presented with words that had either a happy, sad, or neutral meaning. Each word was spoken in a tone of voice (happy, sad, or neutral) that was congruent, incongruent, or neutral with respect to affective meaning, and naming latencies were collected. Across experiments, tone of voice was either blocked or mixed with respect to emotional meaning. The results suggest that emotional tone of voice facilitated linguistic processing of emotional words in an emotion-congruent fashion: information about emotional tone is used during the processing of linguistic content, influencing the recognition and naming of spoken words. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Two visual-world experiments evaluated the time course and use of orthographic information in spoken-word recognition using printed words as referents. Participants saw 4 words on a computer screen and listened to spoken sentences instructing them to click on one of the words (e.g., Click on the word bead). The printed words appeared 200 ms before the onset of the spoken target word. In Experiment 1, the display included the target word and a competitor with either a lower degree (e.g., bear) or a higher degree (e.g., bean) of phonological overlap with the target. Both competitors had the same degree of orthographic overlap with the target. There were more fixations to the competitors than to unrelated distractors. Crucially, the likelihood of fixating a competitor did not vary as a function of the amount of phonological overlap between target and competitor. In Experiment 2, the display included the target word and a competitor with either a lower degree (e.g., bare) or a higher degree (e.g., bear) of orthographic overlap with the target. Competitors were homophonous and thus had the same degree of phonological overlap with the target. There were more fixations to higher overlap competitors than to lower overlap competitors, beginning during the temporal interval where initial fixations driven by the vowel are expected to occur. The authors conclude that orthographic information is rapidly activated as a spoken word unfolds and is immediately used in mapping spoken words onto potential printed referents. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
The present study was designed to examine age differences in the ability to use voice information acquired intentionally (Experiment 1) or incidentally (Experiment 2) as an aid to spoken word identification. Following both implicit and explicit voice learning, participants were asked to identify novel words spoken either by familiar talkers (ones they had been exposed to in the training phase) or by 4 unfamiliar voices. In both experiments, explicit memory for talkers' voices was significantly lower in older than in young listeners. Despite this age-related decline in voice recognition, however, older adults exhibited equivalent, and in some cases greater, benefit than young listeners from having words spoken by familiar talkers. Implications of the findings for age-related changes in explicit versus implicit memory systems are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Are the phonological representations of printed and spoken words isomorphic? This question is addressed by investigating the restrictions on onsets. Cross-linguistic research suggests that onsets of rising sonority are preferred to sonority plateaus, which, in turn, are preferred to sonority falls (e.g., bnif, bdif, lbif). Of interest is whether these grammatical preferences constrain the recognition of auditory and printed words by speakers of English, a language in which such onsets are unattested. Five experiments compare phonological lexical decision responses to nonwords, including unattested onsets, through either aural or visual presentation. Results suggest that both hearers and readers are sensitive to the phonotactics of unattested onsets. However, the phonotactic generalizations of hearers and readers differ in their scope and source. Hearers differentiated all three types of onsets (e.g., bnif, bdif, and lbif), and their behavior implicated both grammatical and statistical constraints. In contrast, readers were able to differentiate only those structures similar to attested English onsets from dissimilar structures (i.e., bnif vs. bdif or lbif), and their preferences reflected statistical knowledge alone. These findings suggest that the phonological representations informing lexical decision to spoken and printed words are not isomorphic. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
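The sonority hierarchy behind the rising > plateau > fall preference can be illustrated with a small lookup table. The numeric sonority values below are assumed for demonstration (obstruents lowest, liquids highest among consonants) and are not the authors' materials:

```python
# Assumed sonority scale for a handful of consonants (illustration only).
SONORITY = {**{c: 1 for c in "pbtdkg"},   # stops
            **{c: 2 for c in "fvsz"},     # fricatives
            **{c: 3 for c in "mn"},       # nasals
            **{c: 4 for c in "lr"}}       # liquids

def onset_profile(onset):
    """Classify a two-consonant onset as rising, plateau, or falling sonority."""
    s1, s2 = SONORITY[onset[0]], SONORITY[onset[1]]
    if s2 > s1:
        return "rising"
    if s2 == s1:
        return "plateau"
    return "falling"

for nonword in ("bnif", "bdif", "lbif"):
    print(nonword, "->", onset_profile(nonword[:2]))
```

Applied to the abstract's examples, bn- is rising (stop to nasal), bd- is a plateau (stop to stop), and lb- is falling (liquid to stop), matching the three onset types the experiments contrast.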

20.
Investigated the pseudohomophone effect, which is considered to be evidence that phonological recoding occurs in the lexical decision task in which a letter string like brane is identified as a nonword. 22 undergraduates read 156 letter strings, half of which were words, and identified them as words or nonwords. Half of the nonwords were pseudohomophones like brane, which sounds like a real word but is not spelled like one; half were strings like slint, which neither looks nor sounds like a real word. Response time to pseudohomophones was slower than response time to other nonwords. The interpretation of this result is that the letter string brane is transformed into a phonological code that accesses the entry for brain in a phonological lexicon, thus necessitating a time-consuming spelling check to avoid making a false positive response. Since letter strings like slint have no lexical entries, a postaccess spelling check is not necessary. Thus, the pseudohomophone effect reflects phonological processing. (French abstract) (11 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
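The spelling-check account in this abstract can be caricatured in a few lines. The pronunciation codes and the mini orthographic and phonological lexicons below are hypothetical stand-ins for a real grapheme-to-phoneme system, used only to show why brane and slint diverge:

```python
# Toy pronunciation codes and lexicons (illustrative assumptions, not real norms).
PRONUNCIATION = {"brain": "breyn", "brane": "breyn", "slint": "slihnt"}
PHON_LEXICON = {"breyn": "brain"}   # phonological code -> stored spelling
ORTH_LEXICON = {"brain"}

def classify(letter_string):
    """Mimic the post-access spelling check: a pseudohomophone's phonological
    code reaches a lexical entry, forcing a slow spelling comparison."""
    if letter_string in ORTH_LEXICON:
        return "word"
    code = PRONUNCIATION.get(letter_string)
    stored = PHON_LEXICON.get(code)
    if stored and stored != letter_string:
        return "pseudohomophone (slow rejection: spelling check needed)"
    return "nonword (fast rejection)"

print(classify("brane"))  # phonological code matches 'brain' -> spelling check
print(classify("slint"))  # no phonological lexical entry -> fast 'no'
```

The extra branch for brane is the toy analogue of the time-consuming spelling check; slint never reaches a lexical entry, so it is rejected without one.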


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号