Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
This study investigated sentence duration and voice onset time (VOT) of plosive consonants in words produced during simultaneous communication (SC) by inexperienced signers. Stimulus words embedded in a sentence were produced with speech only and produced with SC by 12 inexperienced sign language users during the first and last weeks of an introductory sign language course. Results indicated significant differences between the speech and SC conditions in sentence duration and VOT of initial plosives at both the beginning and the end of the class. Voiced/voiceless VOT contrasts were enhanced in SC but followed English voicing rules and varied appropriately with place of articulation. These results are consistent with previous findings regarding the influence of rate changes on the temporal fine structure of speech (Miller, 1987) and were similar to the voicing contrast results reported for clear speech by Picheny, Durlach, and Braida (1986) and for experienced signers using SC by Schiavetti, Whitehead, Metz, Whitehead, and Mignerey (1996).
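The voiced/voiceless contrast described above can be sketched numerically. In the sketch below, the contrast is taken as mean voiceless VOT minus mean voiced VOT; the function names and every VOT value are invented for illustration and are not the study's measurements:

```python
def mean(xs):
    return sum(xs) / len(xs)

def voicing_contrast(voiced_vots, voiceless_vots):
    """Mean voiceless VOT minus mean voiced VOT, in ms."""
    return mean(voiceless_vots) - mean(voiced_vots)

# Invented example values (ms): SC slows speech, stretching VOT and
# widening the voiced/voiceless contrast while preserving English
# voicing rules (voiceless VOT > voiced VOT).
speech_only = voicing_contrast(voiced_vots=[12, 15, 10], voiceless_vots=[55, 60, 58])
sc = voicing_contrast(voiced_vots=[14, 18, 13], voiceless_vots=[75, 82, 78])

assert sc > speech_only  # contrast enhanced under simultaneous communication
```

A wider contrast under SC, with both categories still ordered by English voicing rules, is the pattern the abstract reports.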

2.
Among the contextual factors known to play a role in segmental perception are the rate at which the speech was produced and the lexical status of the item, that is, whether it is a meaningful word of the language. In a series of experiments on the word-initial /b/p/ voicing distinction, we investigated the conditions under which these factors operate during speech processing. The results indicated that under instructions of speeded responding, listeners could, on some trials, ignore some later occurring contextual information within the word that specified rate and lexical status. Importantly, however, they could not ignore speaking rate entirely. Although they could base their decision on only the early portion of the word, when doing so they treated the word as if it were physically short, that is, as if there were no later occurring information specifying a slower rate. This suggests that listeners always take account of rate when identifying the voicing value of a consonant, but precisely which information within the word is used to specify rate can vary with task demands. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Describes W. C. Bagley's (1900, 1901) research on the relation between sound and meaning in human speech perception. Using phonograph cylinders, Bagley presented Ss with spoken words, either individually or in sentences, that had been pronounced with a missing consonant sound. Ss, who were instructed to report only what they had heard, often restored words to their original form (i.e., heard the words as if they had been spoken correctly). Restorations were determined by the position of the missing sound in the word and the position of the word in the sentence. The pattern of results observed by Bagley and his conclusions about human speech perception find remarkable parallels in contemporary psycholinguistics. For example, Bagley explained his results in terms of the critical role of context in speech perception and the sequential use of sound in spoken-word recognition. Some of the main results of Bagley's research are compared to those obtained in more recent experiments. It is concluded that many of the most important insights about spoken-word recognition were first offered by Bagley. (30 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Wide individual differences in early word production characterize children learning the same language, but the role of specific adult input in this interchild variability is unknown. Sampling the speech of American, French, and Swedish mothers (5 in each language group) to their 1-yr-old children, this study analyzed the distribution of consonantal categories, word length, and final consonants in running speech, content words, initial consonant of content words, and target words (adult models of words attempted by the children) as well as the children's own early words (from age 9 mo to about 18 mo). Variability is greater in child words than adult speech, and individual mother–child dyads show no evidence of specific maternal influence on the phonetics of the child's speech. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Researchers have attempted to understand the cognitive processing used in spelling by looking at children's spelling errors. The authors examined 2 other types of data—children's self-reported verbal protocols and on-line measures of spelling latencies. Elementary school children spelled 3 types of common 4-letter words: consonant–consonant–vowel–consonant, consonant–vowel–consonant–consonant, and consonant–vowel–consonant–silent e. Correctly and incorrectly spelled words were analyzed as a function of word type, verbal report, and keystroke latencies. Different typing patterns emerged for strategic and automatic reports and for different word types. Children seemed to use a relatively sequential read-out from long-term memory when directly retrieving a spelling, whereas they used a consonant pair strategy for final consonant clusters when sounding out words. Implications for spelling instruction are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
How are the sounds of words represented in plans for speech production? In Experiment 1, subjects produced sequences of four CVCs as many times as possible in 8 s. We varied the number of repetitions of the initial consonant, vowel, final consonant, CV, rhyme, and whole CVC each sequence required, and measured subjects' speaking rate. Subjects produced more CVCs when the final consonant or whole word was repeated, but were slowed when only initial sounds or CVs were repeated. Two other experiments replicated the location-based effects and extended them to bisyllabic words. We attribute these location-based effects to competition between words that are formally similar, and specifically, to competition between discrepant phonemes in the two words to occupy a particular wordframe position. The fact that only discrepant initial, but not final, sounds slow production suggests that phonemes are activated sequentially, from left to right.

7.
We propose that word recognition in continuous speech is subject to constraints on what may constitute a viable word of the language. This Possible-Word Constraint (PWC) reduces activation of candidate words if their recognition would imply word status for adjacent input which could not be a word (for instance, a single consonant). In two word-spotting experiments, listeners found it much harder to detect apple, for example, in fapple (where [f] alone would be an impossible word) than in vuffapple (where vuff could be a word of English). We demonstrate that the PWC can readily be implemented in a competition-based model of continuous speech recognition, as a constraint on the process of competition between candidate words: where a stretch of speech between a candidate word and a (known or likely) word boundary is not a possible word, activation of the candidate word is reduced. This implementation accurately simulates both the present results and data from a range of earlier studies of speech segmentation.
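As a rough illustration of how such a constraint could operate in a competition model, the sketch below penalizes a candidate word's activation when the residue between it and the word edge could not itself be a word. The "must contain a vowel" heuristic, the function names, and the penalty value are simplifying assumptions for illustration, not the authors' implementation:

```python
VOWELS = set("aeiou")

def possible_word(residue: str) -> bool:
    """Crude stand-in for viability: a possible English word must
    contain a vowel; an empty residue means no leftover input."""
    return residue == "" or any(ch in VOWELS for ch in residue)

def pwc_activation(candidate: str, utterance: str, base: float = 1.0,
                   penalty: float = 0.5) -> float:
    """Reduce the candidate's activation if the input preceding it
    could not be a word (e.g., a lone consonant)."""
    idx = utterance.find(candidate)
    if idx < 0:
        return 0.0  # candidate not present in the input at all
    residue = utterance[:idx]
    return base if possible_word(residue) else base * penalty

# 'apple' in 'fapple': residue 'f' is an impossible word -> reduced activation.
# 'apple' in 'vuffapple': residue 'vuff' could be a word -> full activation.
assert pwc_activation("apple", "fapple") < pwc_activation("apple", "vuffapple")
```

Under this toy rule, apple is harder to spot in fapple than in vuffapple, mirroring the word-spotting result the abstract reports.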

8.
How do infants learn the sound patterns of their native language? By the end of the 1st year, infants have acquired detailed aspects of the phonology and phonotactics of their input language. However, the structure of the learning mechanisms underlying this process is largely unknown. In this study, 9-month-old infants were given the opportunity to induce specific phonological patterns in 3 experiments in which syllable structure, consonant voicing position, and segmental position were manipulated. Infants were then familiarized with fluent speech containing words that either fit or violated these patterns. Subsequent testing revealed that infants rapidly extracted new phonological regularities and that this process was constrained such that some regularities were easier to acquire than others. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
In this study, changes in upper lip and lower lip integrated electromyographic (IEMG) amplitude and temporal measures related to linguistic factors known for their influence on stuttering were investigated. Nonstuttering subjects first read and then verbalized sentences of varying length (sentence length factor), in which meaningless but phonologically appropriate character strings were varied in their position within the sentence (word position factor) and their size (word size factor). It was hypothesized that the production of stressed, vowel-rounding gestures of words in initial position, longer words, and words in longer sentences would be characterized by specific changes in IEMG amplitude reflecting an increase in speech motor demands, intuitively defined as articulatory effort. The findings largely corroborated these assumptions, showing that words in sentence-initial position have shorter word and vowel durations in combination with an increase in IEMG activity. Similarly, we found shorter vowel durations for longer words, and an increase in IEMG activity in sentence-final position. For longer sentences we found a clear increase in speech rate but, contrary to our expectations, a decrease in IEMG activity. This may reflect a movement reduction strategy that allows higher speech rates with increased coarticulation. These findings are discussed both for their implications for normal speech production and for their possible role in explaining stuttering behavior. The data illustrate both why stutterers might run a higher risk of stuttering at these linguistic loci and why they might adopt a strategic solution that decreases the motor demands of speech production. The basic outcome of this study is that higher-order (linguistic) specifications can have clear effects on speech motor production.

10.
The links between spellings and sounds in a large set of English words with consonant–vowel–consonant phonological structure were examined. Orthographic rimes, or units consisting of a vowel grapheme and a final consonant grapheme, had more stable pronunciations than either individual vowels or initial consonant-plus-vowel units. In 2 large-scale studies of word pronunciation, the consistency of pronunciation of the orthographic rime accounted for variance in latencies and errors beyond that contributed by the consistency of pronunciation of the individual graphemes and by other factors. In 3 experiments, as well, children and adults made more errors on words with less consistently pronounced orthographic rimes than on words with more consistently pronounced orthographic rimes. Relations between spellings and sounds in the simple monomorphemic words of English are more predictable when the level of onsets and rimes is taken into account than when only graphemes and phonemes are considered. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
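The notion of spelling-sound consistency above can be made concrete with a toy computation: for each orthographic unit, count how often it takes its most common pronunciation across words. The lexicon, the ASCII pronunciation codes, and the function are invented for illustration, not the study's materials or measure:

```python
from collections import Counter

def consistency(unit_pronunciations):
    """Proportion of words in which a unit takes that unit's modal
    (most frequent) pronunciation, pooled over all units."""
    hits = total = 0
    for prons in unit_pronunciations.values():
        counts = Counter(prons)
        hits += counts.most_common(1)[0][1]  # words with the modal pronunciation
        total += len(prons)
    return hits / total

# The vowel grapheme "a" across the toy words cat, car, cake, hat
# (ASCII stand-ins for the vowel sounds):
vowels = {"a": ["ae", "ah", "ei", "ae"]}
# Orthographic rimes for the same toy words: each rime is pronounced
# one way only, so the rime level is perfectly consistent here.
rimes = {"at": ["aet", "aet"], "ar": ["ahr"], "ake": ["eik"]}

assert consistency(rimes) > consistency(vowels)
```

In this toy lexicon the vowel grapheme alone is only 50% consistent, while the rimes are fully consistent, which is the direction of the effect the abstract reports for English.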

11.
In order to evaluate hypotheses regarding production constraints on final consonants in babbling, 721 utterance-final consonants produced by 6 infants in consonant-vowel-consonant (CVC) syllables were examined and compared with the preceding consonant in the CVC. Consistent with earlier studies, major patterns were observed for each of the three main consonantal properties (place and manner of articulation, and voicing). These patterns included a strong tendency for final consonants to repeat the place of articulation of nonfinal consonants and a tendency for relatively more fricative, nasal and voiceless consonants to occur in final position than in nonfinal position. The high frequency with which final consonants shared place of articulation with the preceding consonant was considered to reflect 'frame dominance', or the tendency of a relatively constant mandibular cycle (the frame) to determine the structure of utterances with very little contribution from other active articulators. The manner and voicing effects were attributed to an overall terminal energy decrease in the vocal production system.

12.
Three experiments demonstrated that the pattern of changes in articulatory rate in a precursor phrase can affect the perception of voicing in a syllable-initial prestress velar stop consonant. Fast and slow versions of a 10-word precursor phrase were recorded, and sections from each version were combined to produce several precursors with different patterns of change in articulatory rate. Listeners judged the identity of a target syllable, selected from a 7-member /gi/ki/ voice-onset-time (VOT) continuum, that followed each precursor phrase after a variable brief pause. The major results were: (a) articulatory-rate effects were not restricted to the target syllable's immediate context; (b) rate effects depended on the pattern of rate changes in the precursor and not the amount of fast or slow speech or the proximity of fast or slow speech to the target syllable; and (c) shortening of the pause (or closure) duration led to a shortening of VOT boundaries rather than a lengthening as previously found in this phonetic context. Results are explained in terms of the role of dynamic temporal expectancies in determining the response to temporal information in speech, and implications for theories (e.g., C. A. Fowler; see record 1981-07588-001) of extrinsic vs intrinsic timing are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Cochlear implant therapy is an epoch-making advance in artificial sensory organ transplants, but its positive effects on speech perception vary. Quantification theory type I, a multivariate analysis, was used to determine predictive factors for speech perception in patients with cochlear implants. Fifty-one postlingually deaf adults (18 male, 33 female; mean age 53.4 years; mean duration of deafness 8.6 years) were tested for speech perception three or more months after implantation of a Nucleus 22-channel cochlear implant. The cause of deafness was labyrinthitis in nine patients, ototoxicity in five, meningitis in three, and unknown in the remaining 34. Speech perception was measured by vowel, consonant, and word recognition using a live voice, and by monosyllable, word, and sentence recognition using a videodisc. All tests were administered in a sound-only condition. Results of the univariate analysis indicated that age at implantation was correlated with monosyllable recognition, and duration of deafness was correlated with live-voice word recognition. Residual hearing and coding strategy were each correlated with all outcome measures. The multivariate analysis revealed that coding strategy, duration of deafness, residual hearing, and the number of electrodes were significant predictors of live-voice word recognition, in that order.
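Quantification theory type I is, in essence, multiple regression with dummy-coded categorical predictors. A minimal sketch of that kind of analysis follows; every number here is fabricated for illustration (simulated data, not the study's patients or results), and the predictor names are simplified stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 51  # same sample size as the study, but simulated patients

# Fabricated predictors: a 0/1 dummy for coding strategy, plus two
# continuous measures (years deaf, residual hearing on a 0-1 scale).
coding_strategy = rng.integers(0, 2, n).astype(float)
duration_deaf = rng.uniform(0, 20, n)
residual_hear = rng.uniform(0, 1, n)

# Fabricated outcome: word recognition generated from the predictors
# plus noise, so the regression has something real to recover.
word_recog = (20 * coding_strategy - 1.5 * duration_deaf
              + 30 * residual_hear + 40 + rng.normal(0, 3, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), coding_strategy, duration_deaf, residual_hear])
beta, *_ = np.linalg.lstsq(X, word_recog, rcond=None)
# beta[1:] estimates the effect of each predictor on word recognition.
```

With simulated data the fitted coefficients land near the generating values, which is the sense in which such an analysis identifies which factors predict the outcome.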

14.
For native speakers of English and several other languages, preceding vocalic duration and F1 offset frequency are two of the cues that convey the stop consonant voicing distinction in word-final position. For speakers learning English as a second language, there are indications that use of vocalic duration, but not F1 offset frequency, may be hindered by a lack of experience with phonemic (i.e., lexical) vowel length (the "phonemic vowel length account": Crowther & Mann, 1992). In this study, native speakers of Arabic, a language that includes a phonemic vowel length distinction, were tested for their use of vocalic duration and F1 offset in production and perception of the English consonant-vowel-consonant forms pod and pot. The phonemic vowel length hypothesis predicts that Arabic speakers should use vocalic duration extensively in production and perception. On the contrary, Experiment 1 revealed that, consistent with Flege and Port's (1981) findings, they produced only slightly (but significantly) longer vocalic segments in their pod tokens. It further indicated that their productions showed a significant variation in F1 offset as a function of final stop voicing. Perceptual sensitivity to vocalic duration and F1 offset as voicing cues was tested in two experiments. In Experiment 2, we employed a factorial combination of these two cues and a finely spaced vocalic duration continuum. Arabic speakers did not appear to be very sensitive to vocalic duration, but they were about as sensitive as native English speakers to F1 offset frequency. In Experiment 3, we employed a one-dimensional continuum of more widely spaced stimuli that varied only vocalic duration. Arabic speakers showed native-English-like sensitivity to vocalic duration. 
An explanation based on the perceptual anchor theory of context coding (Braida et al., 1984; Macmillan, 1987; Macmillan, Braida, & Goldberg, 1987) and phoneme perception theory (Schouten & Van Hessen, 1992) is offered to reconcile the apparently contradictory perceptual findings. The explanation does not attribute native-English-like voicing perception to the Arabic subjects. The findings in this study call for a modification of the phonemic vowel length hypothesis.

15.
In 4 experiments, preschoolers and kindergartners were asked to pronounce the initial consonants of spoken words. Children performed better on short words, such as bay, than on long words, such as bonus. Words with initial consonant clusters, such as brow, were more difficult for the children than words without initial consonant clusters, such as bar. A consonant cluster at the end of the word did not harm performance. Children did relatively well on words like suppose, for which the word's 1st syllable, /s?/, constitutes a correct answer on the initial consonant isolation task. Children did more poorly on words like satin, for which this was not the case. Thus, the linguistic structure of a word affects children's ability to isolate the initial consonant. Implications for the design of phonemic awareness instruction are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Reports an error in "Interactive use of lexical information in speech perception" by Cynthia M. Connine and Charles Clifton (Journal of Experimental Psychology: Human Perception and Performance, 1987[May], Vol 13[2], 291-299). In the aforementioned article, Figures 1 and 2 were inadvertently transposed. The figure on p. 294 is actually Figure 2, and the figure on p. 296 is actually Figure 1. The captions are correct as they stand. (The following abstract of the original article appeared in record 1987-23984-001.) Two experiments are reported that demonstrate contextual effects on identification of speech voicing continua. Experiment 1 demonstrated the influence of lexical knowledge on identification of ambiguous tokens from word–nonword and nonword–word continua. Reaction times for word and nonword responses showed a word advantage only for ambiguous stimulus tokens (at the category boundary); no word advantage was found for clear stimuli (at the continua endpoints). Experiment 2 demonstrated an effect of a postperceptual variable, monetary payoff, on nonword–nonword continua. Identification responses were influenced by monetary payoff, but reaction times for bias-consistent and bias-inconsistent responses did not differ at the category boundary. An advantage for bias-consistent responses was evident at the continua endpoints. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Three experiments in Serbo-Croatian were conducted on the effects of phonological ambiguity and lexical ambiguity on printed word recognition. Subjects decided rapidly if a printed and a spoken word matched or not. Printed words were either phonologically ambiguous (two possible pronunciations) or unambiguous. If phonologically ambiguous, either both pronunciations were real words or only one was, the other being a nonword. Spoken words were necessarily unambiguous. Half the spoken words were auditorily degraded. In addition, the relative onsets of speech and print were varied. Speed of matching print to speech was slowed by phonological ambiguity, and the effect was amplified when the stimulus was also lexically ambiguous. Auditory degradation did not interact with print ambiguity, suggesting that the perception of the spoken word was independent of the printed word. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Although copious research has investigated the role of phonology in reading, little research has investigated the precise nature of the entailed speech representations. The present study examined the similarity of "inner speech" in reading to overt speech. Two lexical decision experiments (in which participants gave speeded word/nonword classifications to letter strings) assessed the effects of implicit variations in vowel and word-initial consonant length. Responses were generally slower for phonetically long stimuli than for phonetically short stimuli, despite equal orthographic lengths. Moreover, the phonetic length effects displayed principled interactions with common factors known to affect lexical decisions, such as word frequency and the similarity of words to nonwords. Both phonetic length effects were stronger among slower readers. The data suggest that acoustic representations activated in silent reading are best characterized as inner speech rather than as abstract phonological codes.

19.
Examined the extent of readers' and prereaders' conscious awareness of the constituents of speech. 35 white middle-class preschool prereaders, kindergarten prereaders, and 1st-grade readers were given 4 tasks. Results indicate that readers outperformed prereaders and that the 2 groups of prereaders did not differ. Readers were better able to embed words in verbal contexts, to segment sentences into words and syllables, to identify the word distinguishing 2 otherwise identical sentences, and to identify words containing particular final syllables. Prereaders sometimes confused syllables and words, and they encountered more difficulty with function words than with contentives. Superior lexical awareness among readers was attributed to their experience with the printed correlates of spoken language. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
In a longitudinal study following prereading kindergartners through first grade, the variables verbal memory, IQ, and speech perception (SP) together predicted 26% of growth in and 42% of the final status of phonological awareness (PA). The correlation between initial status and growth in PA was .51, suggesting that those who begin with high PA develop that skill more quickly than those who begin with lower PA. Although those low and high in SP in kindergarten had substantially different word-decoding scores by the middle of first grade (low: M = 6.8 words; high: M = 18.1 words), this difference was no longer significant once phonological processing was controlled, suggesting that the effect of SP on word decoding is mediated by phonological processing ability. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
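The mediation logic in this last abstract (an SP-to-decoding effect that vanishes once PA is controlled) can be sketched with a partial correlation on simulated data. The variable names, effect sizes, and sample size below are all fabricated so that SP influences decoding only through PA; this is an illustration of the statistical pattern, not the study's data:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z from both."""
    z1 = np.column_stack([np.ones(len(z)), z])
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n = 500  # fabricated sample size, larger than the study's for a stable sketch
sp = rng.normal(size=n)                               # speech perception
pa = 0.8 * sp + rng.normal(scale=0.6, size=n)         # phonological awareness
decoding = 0.9 * pa + rng.normal(scale=0.5, size=n)   # SP acts only via PA

raw = np.corrcoef(sp, decoding)[0, 1]
controlled = partial_corr(sp, decoding, pa)
# Full mediation pattern: a sizable raw correlation that shrinks toward
# zero once the mediator is partialled out.
assert raw > 0.4 and abs(controlled) < abs(raw)
```

The raw SP-decoding correlation is substantial, but the partial correlation controlling PA is near zero, mirroring the reported finding that the SP effect on decoding is carried by phonological processing.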


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号