Similar documents
 20 similar documents found (search time: 15 ms)
1.
"Nouns used by young English-speaking children were more reliably the names of things and their verbs more reliably the names of actions than… the nouns and verbs used by English-speaking adults. It was shown experimentally that young English-speaking children take the part-of-speech membership of a new word as a clue to the meaning of the word. In this way, they make use of the semantic distinctiveness of the parts of speech… . Differences between languages in their parts of speech may be diagnostic of differences in the cognitive psychologies of those who use languages." (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Memorializes Alvin M. Liberman, known for his work in the field of speech perception. He was a pioneer in the experimental study of speech, and he contributed a widely cited motor theory of speech perception. His early research on the failure to accurately perceive words presented in acoustic alphabets led to the discovery that speech is not composed of discrete, segment-sized units, and that the acoustic structure for consonants and vowels is highly context sensitive. From this finding, he developed the motor theory of speech perception, in which perception of speech is a component of the human biological adaptation of language use. His sustained interest in this field of research expanded to include theories of reading difficulties and the discovery that children who are failing to learn to read on schedule characteristically lack phoneme awareness.

3.
Studies comparing children's and adults' labeling of speech stimuli have repeatedly shown that children's phonological decisions are more strongly related to portions of the signal that involve rapid spectral change (i.e., formant transitions) and less related to other signal components than are adults' decisions. Such findings have led to a model termed the Developmental Weighting Shift, which suggests that children initially assign particularly strong weight to formant transitions to help delimit individual words in the continuous speech stream but gradually modify these strategies to be more like those of adults as they learn about word-internal structure. The goal of the current study was to test a reasonable alternative: that these apparent age-related differences in perceptual weighting strategies for speech are instead due to age-related differences in auditory sensitivity. To this end, difference limens (DLs) were obtained from children (ages 5 and 7 years) and adults for three types of acoustic properties: dynamic-spectral, static-spectral, and temporal. Two testable hypotheses were offered: Labeling results could reflect either absolute differences in sensitivity between children and adults or relative differences in sensitivity within each group. Empirical support for either hypothesis would indicate that apparent developmental changes in perceptual weighting strategies are actually due to developmental changes in auditory sensitivity to acoustic properties. Results of this study contradicted predictions of both hypotheses, sustaining the suggestion that children's perceptual weighting strategies for speech-relevant acoustic properties change as they gain experience with a native language.
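The difference limens (DLs) mentioned in this abstract are conventionally estimated with an adaptive staircase procedure. The following is a minimal sketch of a 2-down/1-up staircase, which converges on roughly 70.7% correct detection; the function name, step size, and stopping rule are illustrative assumptions, not the procedure used in the study itself.

```python
def staircase_dl(trial, start=20.0, step=2.0, n_reversals=8):
    """Estimate a difference limen with a 2-down/1-up adaptive staircase.

    `trial(delta)` runs one trial and returns True if the listener
    detected a difference of size `delta`. The track gets harder after
    two consecutive correct responses and easier after each error;
    the DL is the mean of the last few reversal points.
    """
    delta, correct_run, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if trial(delta):
            correct_run += 1
            if correct_run == 2:            # two correct in a row: harder
                correct_run = 0
                if direction == +1:         # track was going up: reversal
                    reversals.append(delta)
                direction = -1
                delta = max(delta - step, step)
        else:                               # one error: easier
            correct_run = 0
            if direction == -1:             # track was going down: reversal
                reversals.append(delta)
            direction = +1
            delta += step
    last = reversals[-6:]                   # average the final reversals
    return sum(last) / len(last)
```

For a deterministic simulated listener who detects any difference of at least 6 units, the track descends from the start value, then oscillates between 4 and 6, and the estimate settles at 5.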

4.
People were trained to decode noise-vocoded speech by hearing monosyllabic stimuli in distorted and unaltered forms. When later presented with different stimuli, listeners were able to successfully generalize their experience. However, generalization was modulated by the degree to which testing stimuli resembled training stimuli: Testing stimuli's consonants were easier to recognize when they had occurred in the same position at training, or flanked by the same vowel, than when they did not. Furthermore, greater generalization occurred when listeners had been trained on existing words than on nonsense strings. We propose that the process by which adult listeners learn to interpret distorted speech is akin to building phonological categories in one's native language, a process where categories and structure emerge from the words in the ambient language without completely abstracting from them.

5.
The development of speech perception during the 1st year reflects increasing attunement to native language features, but the mechanisms underlying this development are not completely understood. One previous study linked reductions in nonnative speech discrimination to performance on nonlinguistic tasks, whereas other studies have shown associations between speech perception and vocabulary growth. The present study examined relationships among these abilities in 11-month-old infants using a conditioned head-turn test of native and nonnative speech sound discrimination, nonlinguistic object-retrieval tasks requiring attention and inhibitory control, and the MacArthur-Bates Communicative Development Inventory (L. Fenson et al., 1993). Native speech discrimination was positively linked to receptive vocabulary size but not to the cognitive control tasks, whereas nonnative speech discrimination was negatively linked to cognitive control scores but not to vocabulary size. Speech discrimination, vocabulary size, and cognitive control scores were not associated with more general cognitive measures. These results suggest specific relationships between domain-general inhibitory control processes and the ability to ignore variation in speech that is irrelevant to the native language and between the development of native language speech perception and vocabulary.

6.
Examined speech addressed to different categories of listeners in a study in which 80 undergraduate women taught a block design task to either a 5–7 yr old girl (n?=?6), a retarded adult (4 women, aged 20–33 yrs), a peer who spoke English as a 2nd language (4 adult women [foreigners]), or a peer who was an unimpaired native speaker of English (2 women undergraduates). Speech addressed to children differed from the speech addressed to native adults along every major dimension. It was clearer, simpler, more attention maintaining, and included longer pauses. Speech addressed to retarded adults was similar to speech addressed to 6-yr-olds. Speech to the retarded adults did differ in timing from the other styles of speaking in that it included fewer and somewhat shorter pauses. Speech addressed to foreigners was more repetitive than speech addressed to native speakers, but in all other ways it was similar. Results show that speakers fine-tuned their communications to the level of cognitive and linguistic sophistication of their listener. The hypothesis that baby talk (the speech addressed to children) is a prototypical special speech register from which other special registers are derived is discussed. (66 ref)

7.
Tested 12 English-speaking children at each of 3 ages (4, 8, and 12 yrs) on 3 speech contrasts of English and Hindi sounds. As a control procedure, 2 bilingual Hindi–English speaking children (aged 4 and 5 yrs) were also tested. Results show that the loss of ability to discriminate the nonnative (Hindi) speech contrasts was evident by 4 yrs of age, suggesting that important reorganizations in linguistic perceptual abilities occur in early childhood. Results support the notion that learning a 2nd language may not necessarily be easier in young childhood. (French abstract) (31 ref)

8.
Dutch listeners were exposed to the English theta sound (as in bath), which replaced [f] in /f/-final Dutch words or, for another group, [s] in /s/-final words. A subsequent identity-priming task showed that participants had learned to interpret theta as, respectively, /f/ or /s/. Priming effects were equally strong when the exposure sound was an ambiguous [fs]-mixture and when primes contained unambiguous fricatives. When the exposure sound was signal-correlated noise, listeners interpreted it as the spectrally similar /f/, irrespective of lexical bias during exposure. Perceptual learning about speech is thus constrained by spectral similarity between the input and established phonological categories, but within those limits, adjustments are thorough enough that even nonnative sounds can be treated fully as native sounds.

9.
Examined the influence of prior training and linguistic experience on the perception of nonnative speech in 2 experiments. Exp I assessed the effect of laboratory training on the ability of 30 English-speaking adults (aged 18–35 yrs) to discriminate 2 speech contrasts that are used to contrast meaning in Hindi but not in English. Short-term training resulted in an amelioration of the initial poor performance of Ss in discriminating a nonnative voicing contrast, but training had no such effect in the case of a Hindi contrast involving a place of articulation distinction. In Exp II, the performance of 3 groups of English-speaking adults (aged 20–38 yrs)—Ss who had studied Hindi for 5 yrs or more, Ss who were studying Hindi as a 2nd language with early experience of Hindi, and Ss studying Hindi as a 2nd language with no early experience of Hindi—was examined to investigate the effect of studying Hindi as a 2nd language for different periods. Ss who had studied Hindi for at least 5 yrs discriminated both Hindi speech contrasts. While 1 yr of 2nd language experience also improved performance of Ss with no early Hindi experience on the voicing contrast, it had little influence on their ability to discriminate the Hindi place contrast. Ss who had early experience hearing the contrasts being used, but no further exposure, could discriminate both the voicing and place distinctions prior to language study. Findings are discussed in terms of the recovery and maintenance of linguistic perceptual ability. (French abstract) (26 ref)

10.
Two studies examined relationships between infants' early speech processing performance and later language and cognitive outcomes. Study 1 found that performance on speech segmentation tasks before 12 months of age related to expressive vocabulary at 24 months. However, performance on other tasks was not related to 2-year vocabulary. Study 2 assessed linguistic and cognitive skills at 4-6 years of age for children who had participated in segmentation studies as infants. Children who had been able to segment words from fluent speech scored higher on language measures, but not general IQ, as preschoolers. Results suggest that speech segmentation ability is an important prerequisite for successful language development, and they offer potential for developing measures to detect language impairment at an earlier age.

11.
The language environment modifies the speech perception abilities found in early development. In particular, adults have difficulty perceiving many nonnative contrasts that young infants discriminate. The underlying perceptual reorganization apparently occurs by 10–12 months. According to one view, it depends on experiential effects on psychoacoustic mechanisms. Alternatively, phonological development has been held responsible, with perception influenced by whether the nonnative sounds occur allophonically in the native language. We hypothesized that a phonemic process appears around 10–12 months that assimilates speech sounds to native categories whenever possible; otherwise, they are perceived in auditory or phonetic (articulatory) terms. We tested this with English-speaking listeners by using Zulu click contrasts. Adults discriminated the click contrasts; performance on the most difficult (80% correct) was not diminished even when the most obvious acoustic difference was eliminated. Infants showed good discrimination of the acoustically modified contrast even by 12–24 months. Together with earlier reports of developmental change in perception of nonnative contrasts, these findings support a phonological explanation of language-specific reorganization in speech perception.

12.
Previous research indicates that multiple levels of linguistic information play a role in the perception and discrimination of non-native phonemes. This study examines the interaction of phonetic, phonemic, and phonological factors in the discrimination of non-native phonotactic contrasts. Listeners of Catalan, English, and Russian were presented with an initial #CC–#CəC contrast in a discrimination task. For the Catalan group, the phonemes and their phonetic implementation were native, but the #CC phonotactics were not. For Russian listeners, the phonemes and phonetic implementation were not native, but Russian allows a large number of #CC sequences. For English listeners, neither the phonetics, the phonemes, nor the phonotactics were native. Two task variables, stimulus length and order of presentation, were also manipulated. Results showed that the Russian listeners were most accurate overall, suggesting that the presence of the phonotactic structure in the listeners' native language may be more important than either phonemic or phonetic information. The interaction between the task manipulations and the linguistic variables is also addressed.

13.
In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was only possible under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker.

14.
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.

15.
Recent work demonstrates that learning to understand noise-vocoded (NV) speech alters sublexical perceptual processes but is enhanced by the simultaneous provision of higher-level, phonological, but not lexical content (Hervais-Adelman, Davis, Johnsrude, & Carlyon, 2008), consistent with top-down learning (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005; Hervais-Adelman et al., 2008). Here, we investigate whether training listeners with specific types of NV speech improves intelligibility of vocoded speech with different acoustic characteristics. Transfer of perceptual learning would provide evidence for abstraction from variable properties of the speech input. In Experiment 1, we demonstrate that learning of NV speech in one frequency region generalizes to an untrained frequency region. In Experiment 2, we assessed generalization among three carrier signals used to create NV speech: noise bands, pulse trains, and sine waves. Stimuli created using these three carriers possess the same slow, time-varying amplitude information and are equated for naïve intelligibility but differ in their temporal fine structure. Perceptual learning generalized partially, but not completely, among different carrier signals. These results delimit the functional and neural locus of perceptual learning of vocoded speech. Generalization across frequency regions suggests that learning occurs at a stage of processing at which some abstraction from the physical signal has occurred, while incomplete transfer across carriers indicates that learning occurs at a stage of processing that is sensitive to acoustic features critical for speech perception (e.g., noise, periodicity).
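The noise-vocoding manipulation behind several of these studies splits speech into frequency bands, extracts each band's slow amplitude envelope, and re-imposes that envelope on a carrier. The following is a minimal sketch with a noise carrier, assuming numpy/scipy; the function name, band count, and band edges are illustrative, and substituting a sine or pulse train at each band's center frequency would give the other carrier types compared in Experiment 2.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=4000.0):
    """Noise-vocode `signal`: split it into log-spaced frequency bands,
    extract each band's amplitude envelope via the analytic signal, and
    use the envelope to modulate band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)                 # analysis band
        env = np.abs(hilbert(band))                 # slow amplitude envelope
        carrier = rng.standard_normal(len(signal))  # white-noise carrier
        out += env * sosfilt(sos, carrier)          # envelope on filtered noise
    return out
```

The result preserves the slow, time-varying amplitude information in each band while discarding the temporal fine structure, which is the defining property of the NV stimuli described above.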

16.
This study examined the roles of speech perception and phonological processing in reading and spelling acquisition for native and nonnative speakers of English in the 1st grade. The performance of 50 children (23 native English speakers and 27 native Korean speakers) was examined on tasks assessing reading and spelling, phonological processing, speech perception, and receptive vocabulary at the start and end of the school year. Korean-speaking children outperformed native English speakers on each of the literacy measures at the start and end of 1st grade, despite differences in their initial phonological representations and processing skills. Furthermore, speech perception and phonological processing were important contributors to early literacy skills, independent of oral language skills, for children from both language groups.

17.
In 5 experiments, the authors investigated how listeners learn to recognize unfamiliar talkers and how experience with specific utterances generalizes to novel instances. Listeners were trained over several days to identify 10 talkers from natural, sinewave, or reversed speech sentences. The sinewave signals preserved phonetic and some suprasegmental properties while eliminating natural vocal quality. In contrast, the reversed speech signals preserved vocal quality while distorting temporally based phonetic properties. The training results indicate that listeners learned to identify talkers even from acoustic signals lacking natural vocal quality. Generalization performance varied across the different signals and depended on the salience of phonetic information. The results suggest similarities in the phonetic attributes underlying talker recognition and phonetic perception.

18.
When presented with several time-compressed sentences, young adults' performance improves with practice. Such adaptation has not been studied in older adults. To study age-related changes in perceptual learning, the authors tested young and older adults' ability to adapt to degraded speech. First, the authors showed that older adults, when equated for starting accuracy with young adults, adapted at a rate and magnitude comparable to young adults. However, unlike young adults, older adults failed to transfer this learning to a different speech rate and did not show additional benefit when practice exceeded 20 sentences. Listeners did not adapt to speech degraded by noise, indicating that adaptation to time-compressed speech was not attributable to task familiarity. Finally, both young and older adults adapted to spectrally shifted noise-vocoded speech. The authors conclude that initial perceptual learning is comparable in young and older adults but maintenance and transfer of this learning decline with age.
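Time-compressed speech of the kind used in this study is typically produced by overlap-add time-scale modification, which shortens the signal without shifting its pitch. The following is a naive sketch assuming numpy; published studies generally use waveform-similarity or pitch-synchronous variants, and the function name and parameters here are illustrative only.

```python
import numpy as np

def time_compress(x, rate=2.0, frame=512, hop_syn=128):
    """Naive overlap-add time compression: read windowed analysis frames
    at a hop of rate * hop_syn samples, then overlap-add them at hop_syn
    samples (no waveform-similarity alignment, so some artifacts remain)."""
    hop_ana = int(rate * hop_syn)
    win = np.hanning(frame)
    n_frames = max(1, (len(x) - frame) // hop_ana + 1)
    out = np.zeros(n_frames * hop_syn + frame)
    norm = np.zeros_like(out)               # window-overlap normalization
    for i in range(n_frames):
        seg = x[i * hop_ana : i * hop_ana + frame] * win
        out[i * hop_syn : i * hop_syn + frame] += seg
        norm[i * hop_syn : i * hop_syn + frame] += win
    return out / np.maximum(norm, 1e-8)     # avoid divide-by-zero at edges
```

With `rate=2.0` the output is roughly half the input duration while the spectral content of each frame, and hence the perceived pitch, is unchanged.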

19.
Domain-specific systems are hypothetically specialized with respect to the outputs they compute and the inputs they allow (Fodor, 1983). Here, we examine whether these 2 conditions for specialization are dissociable. An initial experiment suggests that English speakers could extend a putatively universal phonological restriction to inputs identified as nonspeech. A subsequent comparison of English and Russian participants indicates that the processing of nonspeech inputs is modulated by linguistic experience. Striking, qualitative differences between English and Russian participants suggest that they rely on linguistic principles, both universal and language-particular, rather than generic auditory processing strategies. Thus, the computation of idiosyncratic linguistic outputs is apparently not restricted to speech inputs. This conclusion presents various challenges to both domain-specific and domain-general accounts of cognition.

20.
Two talkers' productions of the same phoneme may be quite different acoustically, whereas their productions of different speech sounds may be virtually identical. Despite this lack of invariance in the relationship between the speech signal and linguistic categories, listeners experience phonetic constancy across a wide range of talkers, speaking styles, linguistic contexts, and acoustic environments. The authors present evidence that perceptual sensitivity to talker variability involves an active cognitive mechanism: Listeners expecting to hear 2 different talkers differing only slightly in average pitch showed performance costs typical of adjusting to talker variability, whereas listeners hearing the same materials but expecting a single talker or given no special instructions did not show these performance costs. The authors discuss the implications for understanding phonetic constancy despite variability between talkers (and other sources of variability) and for theories of speech perception. The results provide further evidence for active, controlled processing in real-time speech perception and are consistent with a model of talker normalization that involves contextual tuning.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号