Similar Documents
20 similar documents found (search time: 31 ms)
1.
Simultaneous communication combines both spoken and manual modes to produce each word of an utterance. This study investigated the potential influence of alterations in the temporal structure of speech produced by inexperienced signers during simultaneous communication on the perception of final consonant voicing. Inexperienced signers recorded words that differed only in the voicing characteristic of the final consonant under two conditions: (1) speech alone and (2) simultaneous communication. The words were subsequently digitally edited to remove the final consonant and played to 20 listeners who, in a forced-choice paradigm, circled the word they thought they heard. Results indicated that accurate perception of final consonant voicing was not impaired by changes in the temporal structure of speech that accompanied the inexperienced signers' simultaneous communication.

2.
The purpose of this investigation was to determine whether the production of sibilant sounds involved adopting a jaw position that corresponded to the closest vertical speaking space (CSS), by analysis of the smallest vertical excursion of the mandible during the performance of different phonetic exercises. A further objective was to establish the variability in the CSS produced by individual sibilant phonemes. Thirty young adult subjects had their CSS determined during three separate phonetic tests, using a kinesiograph (Sirognathograph, Siemens A.G., Bensheim, Germany) and a Bio-Pak (BioResearch Associates Inc., Milwaukee, WI) jaw-tracking software program. The first test was a general phonetic articulation test containing all the sounds of the English language and specifically including all six sibilant word sounds. The second phonetic test contained the six sibilant sounds making up a short sentence. The third test included six single words, each expressing a different sibilant sound. No statistically significant difference among the mean CSS determined in each of the three exercises was demonstrable. A phonetic test containing all sibilant sounds produced a CSS equivalent to that of a test containing all speech sounds. The vertical component of the CSS was also independent of the form or duration of the phonetic tests containing the sibilant word sounds used in this investigation. The CSS determined for 5 of the individual sibilant phonemes in the third exercise differed (p < 0.05) from that calculated for the three complete exercises. It was concluded that voicing sibilant phonemes, or word sounds, does cause the subject to adopt the CSS. (ABSTRACT TRUNCATED AT 250 WORDS)

3.
Neural encoding of temporal speech features is a key component of acoustic and phonetic analyses. We examined the temporal encoding of the syllables /da/ and /ta/, which differ along the temporally based, phonetic parameter of voice onset time (VOT), in primary auditory cortex (A1) of awake monkeys using concurrent multilaminar recordings of auditory evoked potentials (AEP), the derived current source density, and multiunit activity. A general sequence of A1 activation consisting of a lamina-specific profile of parallel and sequential excitatory and inhibitory processes is described. VOT is encoded in the temporal response patterns of phase-locked activity to the periodic speech segments and by "on" responses to stimulus and voicing onset. A transformation occurs between responses in the thalamocortical (TC) fiber input and A1 cells. TC fibers are more likely to encode VOT with "on" responses to stimulus onset followed by phase-locked responses during the voiced segment, whereas A1 responses are more likely to exhibit transient responses both to stimulus and voicing onset. Relevance to subcortical speech processing, the human AEP, and speech psychoacoustics is discussed. A mechanism for categorical differentiation of voiced and unvoiced consonants is proposed.

4.
Three experiments demonstrated that the pattern of changes in articulatory rate in a precursor phrase can affect the perception of voicing in a syllable-initial prestress velar stop consonant. Fast and slow versions of a 10-word precursor phrase were recorded, and sections from each version were combined to produce several precursors with different patterns of change in articulatory rate. Listeners judged the identity of a target syllable, selected from a 7-member /gi/ki/ voice-onset-time (VOT) continuum, that followed each precursor phrase after a variable brief pause. The major results were: (a) articulatory-rate effects were not restricted to the target syllable's immediate context; (b) rate effects depended on the pattern of rate changes in the precursor and not the amount of fast or slow speech or the proximity of fast or slow speech to the target syllable; and (c) shortening of the pause (or closure) duration led to a shortening of VOT boundaries rather than a lengthening as previously found in this phonetic context. Results are explained in terms of the role of dynamic temporal expectancies in determining the response to temporal information in speech, and implications for theories (e.g., C. A. Fowler; see record 1981-07588-001) of extrinsic vs intrinsic timing are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
This investigation examined whether access to sign language as a medium for instruction influences theory of mind (ToM) reasoning in deaf children with similar home language environments. Experiment 1 involved 97 deaf Italian children ages 4-12 years: 56 were from deaf families and had LIS (Italian Sign Language) as their native language, and 41 had acquired LIS as late signers following contact with signers outside their hearing families. Children receiving bimodal/bilingual instruction in LIS together with Sign-Supported Italian and spoken Italian significantly outperformed children in oralist schools in which communication was in Italian and often relied on lipreading. Experiment 2 involved 61 deaf children in Estonia and Sweden ages 6-16 years. On a wide variety of ToM tasks, bilingually instructed native signers in Estonian Sign Language and spoken Estonian succeeded at a level similar to age-matched hearing children. They outperformed bilingually instructed late signers and native signers attending oralist schools. Particularly for native signers, access to sign language in a bilingual environment may facilitate conversational exchanges that promote the expression of ToM by enabling children to monitor others' mental states effectively.

6.
[Correction Notice: An erratum for this article was reported in Vol 21(3) of Neuropsychology (see record 2007-06185-013). Figure 1 on p. 117 (Stimulus Materials section) depicting sample and match stimuli was incorrect. The labels Object condition and Shape condition should be reversed so that the top row is indicated as the shape condition and the bottom row as the object condition.] Deaf and hearing individuals who either used sign language (signers) or not (nonsigners) were tested on visual memory for objects and shapes that were difficult to describe verbally with a same/different matching paradigm. The use of 4 groups was designed to permit a separation of effects related to sign language use (signers vs. nonsigners) and effects related to auditory deprivation (deaf vs. hearing). Forty deaf native signers and nonsigners and 51 hearing signers and nonsigners participated in the study. Signing individuals (both deaf and hearing) were more accurate than nonsigning individuals (deaf and hearing) at memorizing shapes. For the shape memory task but not the object task, deaf signers and nonsigners displayed right hemisphere (RH) advantage over the left hemisphere (LH). Conversely, both hearing groups displayed a memory advantage for shapes in the LH over the RH. Results indicate that enhanced memory performance for shapes in signers (deaf and hearing) stems from the visual skills acquired through sign language use and that deafness, irrespective of language background, leads to the use of a visually based strategy for memory of difficult-to-describe items.

7.
8.
Among the contextual factors known to play a role in segmental perception are the rate at which the speech was produced and the lexical status of the item, that is, whether it is a meaningful word of the language. In a series of experiments on the word-initial /b/p/ voicing distinction, we investigated the conditions under which these factors operate during speech processing. The results indicated that under instructions of speeded responding, listeners could, on some trials, ignore some later occurring contextual information within the word that specified rate and lexical status. Importantly, however, they could not ignore speaking rate entirely. Although they could base their decision on only the early portion of the word, when doing so they treated the word as if it were physically short, that is to say, as if there were no later occurring information specifying a slower rate. This suggests that listeners always take account of rate when identifying the voicing value of a consonant, but precisely which information within the word is used to specify rate can vary with task demands.

9.
Reports an error in "Visual Memory for Shapes in Deaf Signers and Nonsigners and in Hearing Signers and Nonsigners: Atypical Lateralization and Enhancement" by Allegra Cattani, John Clibbens and Timothy J. Perfect (Neuropsychology, 2007[Jan], Vol 21[1], 114-121). Figure 1 on p. 117 (Stimulus Materials section) depicting sample and match stimuli was incorrect. The labels Object condition and Shape condition should be reversed so that the top row is indicated as the shape condition and the bottom row as the object condition. (The following abstract of the original article appeared in record 2006-23022-010.) Deaf and hearing individuals who either used sign language (signers) or not (nonsigners) were tested on visual memory for objects and shapes that were difficult to describe verbally with a same/different matching paradigm. The use of 4 groups was designed to permit a separation of effects related to sign language use (signers vs. nonsigners) and effects related to auditory deprivation (deaf vs. hearing). Forty deaf native signers and nonsigners and 51 hearing signers and nonsigners participated in the study. Signing individuals (both deaf and hearing) were more accurate than nonsigning individuals (deaf and hearing) at memorizing shapes. For the shape memory task but not the object task, deaf signers and nonsigners displayed right hemisphere (RH) advantage over the left hemisphere (LH). Conversely, both hearing groups displayed a memory advantage for shapes in the LH over the RH. Results indicate that enhanced memory performance for shapes in signers (deaf and hearing) stems from the visual skills acquired through sign language use and that deafness, irrespective of language background, leads to the use of a visually based strategy for memory of difficult-to-describe items.

10.
How do infants learn the sound patterns of their native language? By the end of the 1st year, infants have acquired detailed aspects of the phonology and phonotactics of their input language. However, the structure of the learning mechanisms underlying this process is largely unknown. In this study, 9-month-old infants were given the opportunity to induce specific phonological patterns in 3 experiments in which syllable structure, consonant voicing position, and segmental position were manipulated. Infants were then familiarized with fluent speech containing words that either fit or violated these patterns. Subsequent testing revealed that infants rapidly extracted new phonological regularities and that this process was constrained such that some regularities were easier to acquire than others.

11.
Young and older adults provided language samples in response to questions while walking, finger tapping, and ignoring speech or noise. The language samples were scored on 3 dimensions: fluency, complexity, and content. The hypothesis that working memory limitations affect speech production by older adults was tested by comparing baseline samples with those produced while the participants were performing the concurrent tasks. There were baseline differences: Older adults' speech was less fluent and less complex than young adults' speech. Young adults adopted a different strategy in response to the dual-task demands than older adults: They reduced sentence length and grammatical complexity. In contrast, older adults shifted to a reduced speech rate in the dual-task conditions.

12.
13.
French-speaking hearing and deaf children, ranging in age from 6 years 10 months to 14 years 7 months, were required to spell words including phoneme-to-grapheme correspondences that were either statistically dominant or nondominant. Of interest was whether the nature of linguistic experience (cued speech vs. sign language) and the precocity of such experience (early vs. late exposure) determines accuracy in the use of phoneme-to-grapheme knowledge. Cued speech is a system delivering phonemically augmented speechreading through the visual modality. Hearing and deaf children exposed to cued speech early at home relied on accurate phoneme-to-grapheme correspondences, whereas children exposed to cued speech later and at school only, and children exposed to sign language, did not. A critical factor in the development of the phonological route for spelling seems to be early and intensive exposure to a system making all phonological distinctions easily perceivable.

14.
In this study changes in upper lip and lower lip integrated electromyographic (IEMG) amplitude and temporal measures related to linguistic factors known for their influence on stuttering were investigated. Nonstuttering subjects first read and then verbalized sentences of varying length (sentence length factor), in which meaningless but phonologically appropriate character strings were varied in their position within the sentence (word position factor) and their size (word size factor). It was hypothesized that the production of stressed, vowel-rounding gestures of words in initial position, longer words, and words in longer sentences would be characterized by specific changes in IEMG amplitude that would reflect an increase in speech motor demands, intuitively defined as articulatory effort. Basically, the findings corroborated our assumptions, showing that words in sentence initial position have shorter word and vowel durations in combination with an increase in IEMG activity. Similarly, we found shorter vowel durations for longer words, and in sentence final position an increase in IEMG activity. For longer sentences we found a clear increase in speech rate, but contrary to our expectations a decrease in IEMG activity. It was speculated that this might relate to the use of a movement reduction strategy to allow higher speech rates with increased coarticulation. These findings were discussed both for their implications in normal speech production, as well as for their possible implications for explaining stuttering behavior. To this end our data can illustrate both why stutterers might run a higher risk of stuttering at these linguistic loci of stuttering, and why they might come up with a strategic solution to decrease the motor demands in speech production. The basic outcome of this study shows that higher order (linguistic) specifications can have clear effects on speech motor production.

15.
Grammatical properties are found in conventional sign languages of the deaf and in unconventional gesture systems created by deaf children lacking language models. However, they do not arise in spontaneous gestures produced along with speech. The authors propose a model explaining when the manual modality will assume grammatical properties and when it will not. The model argues that two grammatical features, segmentation and hierarchical combination, appear in all settings in which one human communicates symbolically with another. These properties are preferentially assumed by speech whenever words are spoken, constraining the manual modality to a global form. However, when the manual modality must carry the full burden of communication, it is freed from the global form it assumes when integrated with speech, only to be constrained by the task of symbolic communication to take on the grammatical properties of segmentation and hierarchical combination.

16.
For native speakers of English and several other languages, preceding vocalic duration and F1 offset frequency are two of the cues that convey the stop consonant voicing distinction in word-final position. For speakers learning English as a second language, there are indications that use of vocalic duration, but not F1 offset frequency, may be hindered by a lack of experience with phonemic (i.e., lexical) vowel length (the "phonemic vowel length account": Crowther & Mann, 1992). In this study, native speakers of Arabic, a language that includes a phonemic vowel length distinction, were tested for their use of vocalic duration and F1 offset in production and perception of the English consonant-vowel-consonant forms pod and pot. The phonemic vowel length hypothesis predicts that Arabic speakers should use vocalic duration extensively in production and perception. On the contrary, Experiment 1 revealed that, consistent with Flege and Port's (1981) findings, they produced only slightly (but significantly) longer vocalic segments in their pod tokens. It further indicated that their productions showed a significant variation in F1 offset as a function of final stop voicing. Perceptual sensitivity to vocalic duration and F1 offset as voicing cues was tested in two experiments. In Experiment 2, we employed a factorial combination of these two cues and a finely spaced vocalic duration continuum. Arabic speakers did not appear to be very sensitive to vocalic duration, but they were about as sensitive as native English speakers to F1 offset frequency. In Experiment 3, we employed a one-dimensional continuum of more widely spaced stimuli that varied only vocalic duration. Arabic speakers showed native-English-like sensitivity to vocalic duration. 
An explanation based on the perceptual anchor theory of context coding (Braida et al., 1984; Macmillan, 1987; Macmillan, Braida, & Goldberg, 1987) and phoneme perception theory (Schouten & Van Hessen, 1992) is offered to reconcile the apparently contradictory perceptual findings. The explanation does not attribute native-English-like voicing perception to the Arabic subjects. The findings in this study call for a modification of the phonemic vowel length hypothesis.

17.
Five experiments monitored eye movements in phoneme and lexical identification tasks to examine the effect of within-category subphonetic variation on the perception of stop consonants. Experiment 1 demonstrated gradient effects along voice-onset time (VOT) continua made from natural speech, replicating results with synthetic speech (B. McMurray, M. K. Tanenhaus, & R. N. Aslin, 2002). Experiments 2-5 used synthetic VOT continua to examine effects of response alternatives (2 vs. 4), task (lexical vs. phoneme decision), and type of token (word vs. consonant-vowel). A gradient effect of VOT in at least one half of the continuum was observed in all conditions. These results suggest that during online spoken word recognition, lexical competitors are activated in proportion to their continuous distance from a category boundary. This gradient processing may allow listeners to anticipate upcoming acoustic-phonetic information in the speech signal and dynamically compensate for acoustic variability.

18.
Examined the influence of prior training and linguistic experience on the perception of nonnative speech in 2 experiments. Exp I assessed the effect of laboratory training on the ability of 30 English-speaking adults (aged 18–35 yrs) to discriminate 2 speech contrasts that are used to contrast meaning in Hindi but not in English. Short-term training resulted in an amelioration of the initial poor performance of Ss in discriminating a nonnative voicing contrast, but training had no such effect in the case of a Hindi contrast involving a place of articulation distinction. In Exp II, the performance of 3 groups of English-speaking adults (aged 20–38 yrs)—Ss who had studied Hindi for 5 yrs or more, Ss who were studying Hindi as a 2nd language with early experience of Hindi, and Ss studying Hindi as a 2nd language with no early experience of Hindi—was examined to investigate the effect of studying Hindi as a 2nd language for different periods. Ss who had studied Hindi for at least 5 yrs discriminated both Hindi speech contrasts. While 1 yr of 2nd language experience also improved performance of Ss with no early Hindi experience on the voicing contrast, it had little influence on their ability to discriminate the Hindi place contrast. Ss who had early experience hearing the contrasts being used, but no further exposure, could discriminate both the voicing and place distinctions prior to language study. Findings are discussed in terms of the recovery and maintenance of linguistic perceptual ability. (French abstract) (26 ref)

19.
Examined the relationship between ability to discriminate and identify 3 synthetic speech continua representing vowel, voicing, and place contrasts and level of auditory language comprehension in 19 41–68 yr old male aphasics and 8 42–65 yr old non-brain-damaged controls. Aphasics were assigned to a good or a moderate comprehension group (GCG and MCG) on the basis of their auditory language comprehension scores. Both groups of aphasics performed the same as controls on the vowel contrasts. In comparison with controls, aphasics in the MCG had difficulty perceiving voicing and place contrasts, whereas aphasics in the GCG had difficulty perceiving place contrasts only. The MCG had more difficulty than the other 2 groups in discriminating stimuli at the inner boundary of each phoneme category. They were also most impaired in identifying the endpoints of the place and voicing continua. Place contrasts were more difficult to identify than voicing contrasts for both aphasic groups. In terms of the relationship between discrimination and identification, the majority of Ss either discriminated and identified the stimuli at equal levels of performance, or discriminated but did not identify the stimuli. Results indicate that auditory language comprehension predicts, to some extent, perception of voicing and place contrasts. (French abstract) (20 ref)

20.
Caregivers of patients diagnosed with Alzheimer's disease (AD) are often advised to modify their speech to facilitate the patients' sentence comprehension. Three common recommendations are to (a) speak in simple sentences, (b) speak slowly, and (c) repeat one's utterance, using the same words. These three speech modifications were experimentally manipulated in order to investigate their individual and combined effects on sentence comprehension in AD. Fifteen patients with mild to moderate AD and 20 healthy older persons were tested on a sentence comprehension task with sentences varying in terms of (a) degree of grammatical complexity, (b) rate of presentation (normal vs. slow), and (c) form of repetition (verbatim vs. paraphrase). The results indicated a significant decline in sentence comprehension for the AD group. Sentence comprehension improved, however, after the sentence was repeated in either verbatim or paraphrased form. However, the patients' comprehension did not improve for sentences presented at the slow speech rate. This pattern of results is explained vis-à-vis the patients' working memory loss. The findings challenge the appropriateness of several clinical recommendations.
