Similar Articles
 20 similar articles found (search time: 546 ms)
1.
This study investigated vowel length discrimination in infants from 2 language backgrounds, Japanese and English, in which vowel length is either phonemic or nonphonemic. Experiment 1 revealed that English 18-month-olds discriminate short and long vowels although vowel length is not phonemically contrastive in English. Experiments 2 and 3 revealed that Japanese 18-month-olds also discriminate the pairs but in an asymmetric manner: They detected only the change from long to short vowel, but not the change in the opposite direction, although English infants in Experiment 1 detected the change in both directions. Experiment 4 tested Japanese 10-month-olds and revealed a symmetric pattern of discrimination similar to that of English 18-month-olds. Experiment 5 revealed that native adult Japanese speakers, unlike Japanese 18-month-old infants who are presumably still developing phonological perception, ultimately acquire a symmetrical discrimination pattern for the vowel contrasts. Taken together, our findings suggest that English 18-month-olds and Japanese 10-month-olds perceive vowel length using simple acoustic-phonetic cues, whereas Japanese 18-month-olds perceive it under the influence of the emerging native phonology, which leads to a transient asymmetric pattern in perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Hebrew and Arabic are Semitic languages with a similar morphological structure and orthographies that differ in visual complexity. Two experiments explored the interaction of the characteristics of orthography and hemispheric abilities on lateralized versions of a letter-matching task (Experiment 1) and a global-local task (Experiment 2). In Experiment 1, native Hebrew readers and native Arabic readers fluent in Hebrew matched letters in the 2 orthographies. The results support the hypothesis that Arabic orthography is more difficult than Hebrew orthography for participants who can read both languages and that this difficulty has its strongest effects in the left visual field. In Experiment 2, native Arabic speakers performed a global-local letter detection task with Arabic letters with 2 types of inconsistent stimuli: different and similar. The results support the hypothesis that the right hemisphere of skilled Arabic readers cannot distinguish between similar Arabic letters, whereas the left hemisphere can. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Four experiments explored the effects of specific language characteristics on hemispheric functioning in reading nonwords using a lateralized trigram identification task. Experiment 1 tested whether hemispheric asymmetries are due to the test language or to the native language of the participants. Results showed that native language had a stronger effect on hemispheric strategies than test language. Experiment 2 showed that latency to target letters in the CVCs revealed the same asymmetry as qualitative errors for Hebrew speakers but not for English speakers and that exposure duration of the stimuli affected misses differentially according to letter position. Experiment 3 used number trigrams to equate reading conventions in the 2 languages. Qualitative error scores still revealed opposing asymmetry patterns. Experiments 1-3 used vertical presentations. Experiment 4 used horizontal presentation, which eliminated sequential processing in both hemispheres in Hebrew speakers, whereas English speakers still showed sequential processing in both hemispheres. Comparison of the 2 presentations suggests that stimulus arrangement affected qualitative errors in the left visual field but not the right visual field for English speakers and in both visual fields for Hebrew speakers.… (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Speech production studies have shown that the phonological form of a word is made up of phonemic segments in stress-timed languages (e.g., Dutch) and of syllables in syllable-timed languages (e.g., Chinese). To clarify the functional unit of mora-timed languages, the authors asked native Japanese speakers to perform an implicit priming task (A. S. Meyer, 1990, 1991). In Experiment 1, participants could speed up their production latencies when initial consonant and vowel (CV) of a target word were known in advance but failed to do so when the vowel was unknown. In Experiment 2, prior knowledge of the consonant and glide (Cj) produced no significant priming effect. However, in Experiment 3, significant effects were found for the consonant-vowel coupled with a nasal coda (CVN) and the consonant with a diphthong (CVV), compared with the consonant-vowel alone (CV). These results suggest that the implicit priming effects for Japanese are closely related to the CV-C and CV-V structure, called the mora. The authors discuss cross-linguistic differences in the phonological representation involved in phonological encoding, within current theories of word production. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
We examined the possible relevance of locus equations to human production and perception of stop consonants. The orderly output constraint (OOC) of Sussman, Fruchter, and Cable (1995) claims that humans have evolved to produce speech such that F2 at consonant release and F2 at vowel midpoint are linearly related for consonants so that developing perceptual systems can form representations in an F2ons-by-F2vowel space. The theory claims that this relationship described by locus equations can distinguish consonants, and that the linearity of locus equations is captured in neural representations and is thus perceptually relevant. We investigated these claims by testing how closely locus equations reflect the production and perception of stop consonants. In Experiment 1, we induced speakers to change their locus equation slope and intercept parameters systematically, but found that consonants remained distinctive in slope-by-intercept space. In Experiment 2, we presented stop-consonant syllables with their bursts removed to listeners, and compared their classification error matrices with the predictions of a model using locus equation prototypes and with those of an exemplar-based model that uses F2ons and F2vowel, but not locus equations. Both models failed to account for a large proportion of the variance in listeners' responses; the locus equation model was no better in its predictions than the exemplar model. These findings are discussed in the context of the OOC.
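The linear relation that a locus equation captures can be made concrete with a short sketch. The F2 values below are invented for illustration (one consonant place across five vowel contexts), not data from the study; the fit itself is ordinary least squares.

```python
# Hypothetical F2 measurements (Hz) for stop + vowel syllables sharing one
# place of articulation: F2 at consonant release (onset) and at vowel midpoint.
f2_vowel = [800.0, 1100.0, 1500.0, 1900.0, 2300.0]
f2_onset = [1450.0, 1520.0, 1680.0, 1810.0, 1990.0]

def locus_equation(x, y):
    """Least-squares fit of the line y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = locus_equation(f2_vowel, f2_onset)
# The resulting (slope, intercept) pair is one point in the slope-by-intercept
# space in which, on the OOC account, consonant places remain distinctive.
```

A shallower slope means F2 onset stays anchored near a fixed "locus" frequency regardless of the following vowel; a slope near 1 means the onset tracks the vowel closely.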

6.
Reports an error in "Language status and hemispheric involvement in reading: Evidence from trilingual Arabic speakers tested in Arabic, Hebrew, and English" by Raphiq Ibrahim and Zohar Eviatar (Neuropsychology, 2009[Mar], Vol 23[2], 240-254). The Arabic text appearing in the Appendix on page 254 of Table C1 did not reproduce accurately. The corrected table appears in the erratum. (The following abstract of the original article appeared in record 2009-02621-012.) This study explores the effects of language status on hemispheric involvement in lexical decision. The authors looked at the responses of native Arabic speakers in Arabic (L1 for reading) and in two second languages (L2): Hebrew, which is similar to L1 in morphological structure, and English, which is very different from L1. Two groups of Arabic speakers performed lateralized lexical decision tasks in the three languages, using unilateral presentations and bilateral presentations. These paradigms allowed us to infer both hemispheric specialization and interhemispheric communication in the three languages, and the effects of language status (native vs. nonnative) and similarity on hemispheric patterns of responses. In general the authors show an effect of language status in the right visual field (RVF), reflecting the greater facility of the left hemisphere (LH) in recognizing words in the participant's native Arabic than in their other languages. The participants revealed similar patterns of interhemispheric integration across the languages, with more integration occurring for words than for nonwords. Both hemispheres revealed sensitivity to morphological complexity, a pattern similar to that of native Hebrew readers and different from that of native English readers. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
This study examined the roles of speech perception and phonological processing in reading and spelling acquisition for native and nonnative speakers of English in the 1st grade. The performance of 50 children (23 native English speakers and 27 native Korean speakers) was examined on tasks assessing reading and spelling, phonological processing, speech perception, and receptive vocabulary at the start and end of the school year. Korean-speaking children outperformed native English speakers on each of the literacy measures at the start and end of 1st grade, despite differences in their initial phonological representations and processing skills. Furthermore, speech perception and phonological processing were important contributors to early literacy skills, independent of oral language skills, for children from both language groups. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
The present study investigated the perception and production of English /w/ and /v/ by native speakers of Sinhala, German, and Dutch, with the aim of examining how their native language phonetic processing affected the acquisition of these phonemes. Subjects performed a battery of tests that assessed their identification accuracy for natural recordings, their degree of spoken accent, their relative use of place and manner cues, the assimilation of these phonemes into native-language categories, and their perceptual maps (i.e., multidimensional scaling solutions) for these phonemes. Most Sinhala speakers had near-chance identification accuracy, Germans ranged from chance to 100% correct, and Dutch speakers had uniformly high accuracy. The results suggest that these learning differences were caused more by perceptual interference than by category assimilation; Sinhala and German speakers both have a single native-language phoneme that is similar to English /w/ and /v/, but the auditory sensitivities of Sinhala speakers make it harder for them to discern the acoustic cues that are critical to /w/-/v/ categorization. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners' native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners' native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners' processing of phonemes takes into account the constitution of their language's phonemic repertoire and the implications that this has for contextual variability.

10.
This investigation examined the timing relationships of EMG activity underlying vowel production in 2 normal individuals and in 2 individuals with marked-to-severe apraxia of speech of approximately two-and-one-half years duration. The timing of lip muscle activity was investigated in monosyllabic words embedded in phrases and in syllable word stems as a function of changes in word length. Specifically, the onset and offset of EMG activity of lip muscles used for production of /u/ in the monosyllables and word stems were examined. The results revealed that the relative amounts of time devoted to onset and offset of EMG activity for lip rounding are disorganized in apraxia of speech. Word length appeared to affect the timing of the onset of muscle activity for both the normal speakers and the speakers with apraxia of speech. Word length also influenced the offset of muscle activity, but its effect was less systematic for the speakers with apraxia of speech. The findings suggest that termination of EMG activity may be at least as disturbed as the initiation of EMG activity in apraxia of speech.

11.
This study examined speaking-rate-induced spectral and temporal variability of F2 formant trajectories for target words produced in a carrier phrase at speaking rates ranging from fast to slow. F2 onset frequency measured at the first glottal pulse following the stop consonant release in target words was used to quantify the extent to which adjacent consonantal and vocalic gestures overlapped; F2 target frequency was operationally defined as the first occurrence of a frequency minimum or maximum following F2 onset frequency. Regression analyses indicated 70% of functions relating F2 onset and vowel duration were statistically significant. The strength of the effect was variable, however, and the direction of significant functions often differed from that predicted by a simple model of overlapping, sliding gestures. Results of a partial correlation analysis examining interrelationships among F2 onset, F2 target frequency, and vowel duration across the speaking rate range indicated that covariation of F2 target with vowel duration may obscure the relationship between F2 onset and vowel duration across rate. The results further suggested that a sliding-gesture model of acoustic variability associated with speaking rate change only partially accounts for the present data, and that such a view accounts for some speakers' data better than others'.
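A first-order partial correlation of the kind described above can be sketched as follows. The formant and duration values are invented for illustration, and the helper functions implement the standard textbook formula, not the study's exact procedure.

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Hypothetical per-token measurements across a fast-to-slow rate range:
f2_onset  = [1500.0, 1550.0, 1600.0, 1700.0, 1750.0, 1820.0]  # Hz
vowel_dur = [90.0, 110.0, 130.0, 160.0, 190.0, 220.0]         # ms
f2_target = [1900.0, 1880.0, 1850.0, 1800.0, 1770.0, 1740.0]  # Hz

# Raw F2 onset / vowel duration relation, then the same relation with
# covariation of F2 target frequency partialled out.
r_raw = pearson(f2_onset, vowel_dur)
r_partial = partial_corr(f2_onset, vowel_dur, f2_target)
```

Comparing `r_raw` with `r_partial` shows how strongly the raw onset-duration function is carried by the shared covariation with F2 target, which is the confound the study's analysis was designed to isolate.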

12.
The perception of consonant clusters that are phonotactically illegal word initially in English (e.g., /tl/, /sr/) was investigated to determine whether listeners' phonological knowledge of the language influences speech processing. Experiment 1 examined whether the phonotactic context effect (Massaro & Cohen, 1983), a bias toward hearing illegal sequences (e.g., /tl/) as legal (e.g., /tr/), is more likely due to knowledge of the legal phoneme combinations in English or to a frequency effect. In Experiment 2, Experiment 1 was repeated with the clusters occurring word medially to assess whether phonotactic rules of syllabification modulate the phonotactic effect. Experiment 3 examined whether vowel epenthesis, another phonological process, might also affect listeners' perception of illegal sequences as legal by biasing them to hear a vowel between the consonants of the cluster (e.g., /talae/). Results suggest that knowledge of the phonotactically permissible sequences in English can affect phoneme processing in multiple ways.

13.
Three experiments examined the possibility that the perception of vowels involves an opponent organization. Five vowel sounds were chosen on the basis of the vowel circles mapped by Yilmaz (1967, 1968). Whispered vowels were recorded and consisted of two pairs of complementary vowels (/u/ (hoot) and /æ/ (hat), and /o/ (hoe) and /e/ (hay)) plus the most neutral English vowel (nearest to the centre of the vowel circle) /ə/ (the). Experiment 1 revealed that mixtures of complementary vowels were most often confused with the neutral vowel /ə/. Experiment 2 revealed that complementary vowel mixtures and the neutral vowel /ə/ were rated most similar of all pair types excluding those in which the same vowel was present in both members of the pair. The final experiment examined the perception of pure and mixed vowels after adaptation to pure vowels. Analogous to adaptation in colour perception, adaptation to a vowel resulted in a perceptual shift of the neutral vowel and vowel mixtures towards the complementary vowel. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
This study investigated sentence duration and voice onset time (VOT) of plosive consonants in words produced during simultaneous communication (SC) by inexperienced signers. Stimulus words embedded in a sentence were produced with speech only and produced with SC by 12 inexperienced sign language users during the first and last weeks of an introductory sign language course. Results indicated significant differences between the speech and SC conditions in sentence duration and VOT of initial plosives at both the beginning and the end of the class. Voiced/voiceless VOT contrasts were enhanced in SC but followed English voicing rules and varied appropriately with place of articulation. These results are consistent with previous findings regarding the influence of rate changes on the temporal fine structure of speech (Miller, 1987) and were similar to the voicing contrast results reported for clear speech by Picheny, Durlach, and Braida (1986) and for experienced signers using SC by Schiavetti, Whitehead, Metz, Whitehead, and Mignerey (1996).

15.
This study examined the relationship between morphological structure of languages and performance asymmetries of native speakers in lateralized tasks. In 2 experiments, native speakers of English (concatenative morphology: stem plus affix) and of Hebrew and Arabic (nonconcatenative morphology: root plus word form) were presented with lateralized lexical decision tasks, in which the morphological structure of both words and nonwords was manipulated. In the 1st study, stimuli were presented unilaterally. In the 2nd study, 2 stimuli were presented bilaterally, and participants were cued to respond to 1 of them. Three different indexes of hemispheric integration were tested: processing dissociation, effects of distractor status, and the bilateral effect. Lateralization patterns in the 3 languages revealed both common and language-specific patterns. For English speakers, only the left hemisphere (LH) was sensitive to morphological structure, consistent with the hypothesis that the LH processes right visual field stimuli independently but that the right hemisphere uses LH abilities to process words in the left visual field. In Hebrew and Arabic, both hemispheres are sensitive to morphological structure, and interhemispheric transfer of information may be more symmetrical than in English. The relationship between universal and experience-specific effects on brain organization is discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
[Correction Notice: An erratum for this article was reported in Vol 24(6) of Neuropsychology (see record 2010-22445-001). The Arabic text appearing in the Appendix on page 254 of Table C1 did not reproduce accurately. The corrected table appears in the erratum.] This study explores the effects of language status on hemispheric involvement in lexical decision. The authors looked at the responses of native Arabic speakers in Arabic (L1 for reading) and in two second languages (L2): Hebrew, which is similar to L1 in morphological structure, and English, which is very different from L1. Two groups of Arabic speakers performed lateralized lexical decision tasks in the three languages, using unilateral presentations and bilateral presentations. These paradigms allowed us to infer both hemispheric specialization and interhemispheric communication in the three languages, and the effects of language status (native vs. nonnative) and similarity on hemispheric patterns of responses. In general the authors show an effect of language status in the right visual field (RVF), reflecting the greater facility of the left hemisphere (LH) in recognizing words in the participant's native Arabic than in their other languages. The participants revealed similar patterns of interhemispheric integration across the languages, with more integration occurring for words than for nonwords. Both hemispheres revealed sensitivity to morphological complexity, a pattern similar to that of native Hebrew readers and different from that of native English readers. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Studies involving human infants and monkeys suggest that experience plays a critical role in modifying how subjects respond to vowel sounds between and within phonemic classes. Experiments with human listeners were conducted to establish appropriate stimulus materials. Then, eight European starlings (Sturnus vulgaris) were trained to respond differentially to vowel tokens drawn from stylized distributions for the English vowels /i/ and /I/, or from two distributions of vowel sounds that were orthogonal in the F1-F2 plane. Following training, starlings' responses generalized with facility to novel stimuli drawn from these distributions. Responses could be predicted well on the bases of frequencies of the first two formants and distributional characteristics of experienced vowel sounds with a graded structure about the central "prototypical" vowel of the training distributions. Starling responses corresponded closely to adult human judgments of "goodness" for English vowel sounds. Finally, a simple linear association network model trained with vowels drawn from the avian training set provided a good account for the data. Findings suggest that little more than sensitivity to statistical regularities of language input (probability-density distributions) together with organizational processes that serve to enhance distinctiveness may accommodate much of what is known about the functional equivalence of vowel sounds.
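A minimal version of a linear association network of the kind mentioned above can be sketched as follows. The formant means, spreads, learning rate, and epoch count are all invented for illustration; the model simply maps scaled (F1, F2) inputs onto two category units trained with the delta (LMS) rule, not the study's actual network.

```python
import random

random.seed(0)

def sample(mean_f1, mean_f2, n=100, spread=60.0):
    """Draw n (F1, F2) tokens from an isotropic Gaussian vowel distribution."""
    return [(random.gauss(mean_f1, spread), random.gauss(mean_f2, spread))
            for _ in range(n)]

# Two invented vowel distributions in (F1, F2) space (Hz), loosely modeled
# on English /i/ (low F1, high F2) and /I/ (higher F1, lower F2).
data = [(v, 0) for v in sample(300.0, 2300.0)] + \
       [(v, 1) for v in sample(450.0, 2000.0)]

# Linear association network: one linear unit per vowel category,
# trained with the delta (LMS) rule on scaled formant inputs.
w = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]  # per-category weights + bias
lr = 0.05
for _ in range(200):  # training epochs
    random.shuffle(data)
    for (f1, f2), label in data:
        x = (f1 / 1000.0, f2 / 1000.0, 1.0)  # scaled input + bias term
        for k in (0, 1):
            out = sum(wi * xi for wi, xi in zip(w[k], x))
            target = 1.0 if k == label else 0.0
            for i in range(3):
                w[k][i] += lr * (target - out) * x[i]

def classify(f1, f2):
    """Assign a token to the category whose unit responds most strongly."""
    x = (f1 / 1000.0, f2 / 1000.0, 1.0)
    scores = [sum(wi * xi for wi, xi in zip(wk, x)) for wk in w]
    return scores.index(max(scores))
```

Because each unit's response grows with proximity to its training distribution's center, the network naturally grades novel tokens by distance from the "prototypical" vowel, which is the behavior the starling data were said to show.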

18.
Positron emission tomography (PET) was used in a cross-linguistic study to compare pitch processing in native speakers of English, a nontone language, with those of Thai, a tone language. When discriminating pitch patterns in Thai words, only the Thai subjects showed activation in the left frontal operculum. Activation of this region near the classically defined Broca's area suggests that the brain recognizes functional properties, rather than simply acoustic properties, of complex auditory cues in accessing language-specific mechanisms in pitch perception.

19.
English male and female names have different phonological properties. This article examines 3 questions about this phenomenon: How informative is phonology about gender? Have English speakers learned this information? Does this knowledge affect name usage? Results from a connectionist model indicate that English phonology predicts name gender quite well. Experiments found that English speakers have learned these cues. For example, names were classified as male or female more quickly and accurately when they had phonologically typical properties. Further studies demonstrated that the evolution of names in this century was affected by how male or female they sounded and that knowledge of phonological cues to gender influences the perception and structure of brand names. Implications for stereotyping, individual differences, and language research are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
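The idea that phonological form carries gender information can be illustrated with a toy counting sketch. The mini-lexicon is invented, and the "ends in a vowel letter" feature is only a crude orthographic stand-in for the final-phoneme cues such models actually learn.

```python
# Toy illustration: estimate how predictive one phonological cue is of
# name gender by simple counting over an invented mini-lexicon.
names = [("maria", "F"), ("anna", "F"), ("emily", "F"), ("sophia", "F"),
         ("lisa", "F"), ("john", "M"), ("robert", "M"), ("mark", "M"),
         ("david", "M"), ("kevin", "M")]

def ends_in_vowel_letter(name):
    # Crude proxy for a vowel-final phonological form.
    return name[-1] in "aeiouy"

with_cue = [gender for name, gender in names if ends_in_vowel_letter(name)]
p_female_given_cue = with_cue.count("F") / len(with_cue)
```

A connectionist model does the same kind of thing at scale: it aggregates many such graded cues (final segment, stress pattern, length) into a single estimate of how "male" or "female" a name sounds.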

20.
The perceptual salience of relative spectral change [Lahiri et al., J. Acoust. Soc. Am. 76, 391-404 (1984)] and formant transitions as cues to labial and alveolar/dental place of articulation was assessed in a conflicting cue paradigm. The prototype stimuli were produced by two English speakers. The stimuli with conflicting cues to place of articulation were created by altering the spectra of the signals so that the change in spectral energy from signal onset to voicing onset specified one place of articulation while the formant transitions specified the other place of articulation. Listeners' identification of these stimuli was determined principally by the information from formant transitions. This outcome provides no support for the view that the relative spectral change is a significant perceptual cue to stop consonant place of articulation.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号