Similar documents
20 similar documents were retrieved (search time: 742 ms).
1.
The purpose of this study was to compare the intelligibility of two types of alaryngeal speech commonly used after total laryngectomy. Four male oesophageal speakers and four male tracheo-oesophageal speakers read a series of monosyllabic words, multisyllabic words and sentences. The monosyllabic word list consisted of several minimal pairs for each of eight phonetic contrasts; multisyllabic words and sentences were not selected on specific phonetic grounds. Audio recordings of all subjects' readings were presented to eight naïve adult listeners who completed both an item identification task and a scaling procedure. The item identification task revealed higher intelligibility for tracheo-oesophageal speakers than for oesophageal speakers during the monosyllabic word condition. Results from the scaling procedure indicated that listeners' subjective intelligibility ratings were also higher for the tracheo-oesophageal speakers than for the oesophageal speakers. Moreover, a high positive correlation was found between the speakers' intelligibility scores obtained from the word identification task and the scaling procedure.
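As a rough illustration of the final analysis step described above (correlating intelligibility scores from the word-identification task with the scaled ratings), here is a minimal Python sketch; the per-speaker values are invented placeholders, not data from the study.

```python
# Illustrative only: correlate per-speaker identification accuracy with
# per-speaker mean scaled intelligibility ratings (all values are made up).
import numpy as np
from scipy.stats import pearsonr

identification_scores = np.array([0.62, 0.71, 0.55, 0.68, 0.80, 0.77, 0.83, 0.74])
scaled_ratings = np.array([3.1, 3.6, 2.8, 3.4, 4.2, 4.0, 4.5, 3.9])

r, p = pearsonr(identification_scores, scaled_ratings)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```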

2.
Tracheoesophageal (TE) speech is now the most common method of voice rehabilitation after total laryngectomy. The speech intelligibility of laryngectomees who use TE speech as their primary mode of communication was evaluated by 20 "naive" listeners. Two speech intelligibility tests were administered using phonetically balanced rhyming words or lists of spondee words. The overall intelligibility for the group of laryngectomees was 76%, with a wide range of variability among the individual TE speakers. We concluded that TE speech is significantly less intelligible to naive listeners than normal laryngeal speech; further refinement of voice rehabilitation for laryngectomees is needed.

3.
This study examined changes in the sentence intelligibility scores of speakers with dysarthria in association with different signal-independent factors (contextual influences). This investigation focused on the presence or absence of iconic gestures while speaking sentences with low or high semantic predictiveness. The speakers were 4 individuals with dysarthria, who varied from one another in terms of their level of speech intelligibility impairment, gestural abilities, and overall level of motor functioning. Ninety-six inexperienced listeners (24 assigned to each speaker) orthographically transcribed 16 test sentences presented in an audio + video or audio-only format. The sentences had either low or high semantic predictiveness and were spoken by each speaker with and without the corresponding gestures. The effects of signal-independent factors (presence or absence of iconic gestures, low or high semantic predictiveness, and audio + video or audio-only presentation formats) were analyzed for individual speakers. Not all signal-independent information benefited speakers similarly. Results indicated that use of gestures and high semantic predictiveness improved sentence intelligibility for 2 speakers. The other 2 speakers benefited from high predictive messages. The audio + video presentation mode enhanced listener understanding for all speakers, although there were interactions related to specific speaking situations. Overall, the contributions of relevant signal-independent information were greater for the speakers with more severely impaired intelligibility. The results are discussed in terms of understanding the contribution of signal-independent factors to the communicative process.

4.
The contribution of reduced speaking rate to the intelligibility of "clear" speech (Picheny, Durlach, & Braida, 1985) was evaluated by adjusting the durations of speech segments (a) via nonuniform signal time-scaling, (b) by deleting and inserting pauses, and (c) by eliciting materials from a professional speaker at a wide range of speaking rates. Key words in clearly spoken nonsense sentences were substantially more intelligible than those spoken conversationally (15 points) when presented in quiet for listeners with sensorineural impairments and when presented in a noise background to listeners with normal hearing. Repeated presentation of conversational materials also improved scores (6 points). However, degradations introduced by segment-by-segment time-scaling rendered this time-scaling technique problematic as a means of converting speaking styles. Scores for key words excised from these materials and presented in isolation generally exhibited the same trends as in sentence contexts. Manipulation of pause structure reduced scores both when additional pauses were introduced into conversational sentences and when pauses were deleted from clear sentences. Key-word scores for materials produced by a professional talker were inversely correlated with speaking rate, but conversational rate scores did not approach those of clear speech for other talkers. In all experiments, listeners with normal hearing exposed to flat-spectrum background noise performed similarly to listeners with hearing loss.
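The duration manipulations described above can be pictured with a short sketch. This is not the processing used in the study (which applied nonuniform, segment-by-segment scaling); it assumes the librosa package and a placeholder file name, and applies a single uniform stretch plus a pause insertion.

```python
# Sketch: uniform time-scaling and pause insertion (assumes librosa;
# "speech.wav" is a placeholder). The study's nonuniform segment-by-segment
# scaling is not reproduced here.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=None)

# Stretch the signal to 1.25x its duration (rate < 1 slows the speech down).
slowed = librosa.effects.time_stretch(y, rate=0.8)

# Insert a 200-ms silent pause at the midpoint of the original signal.
pause = np.zeros(int(0.2 * sr))
mid = len(y) // 2
with_pause = np.concatenate([y[:mid], pause, y[mid:]])
```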

5.
Normal-hearing and hearing-impaired listeners were tested to determine F0 difference limens for synthetic tokens of 5 steady-state vowels. The same stimuli were then used in a concurrent-vowel labeling task with the F0 difference between concurrent vowels ranging between 0 and 4 semitones. Finally, speech recognition was tested for synthetic sentences in the presence of a competing synthetic voice with the same, a higher, or a lower F0. Normal-hearing listeners and hearing-impaired listeners with small F0-discrimination (ΔF0) thresholds showed improvements in vowel labeling when there were differences in F0 between vowels on the concurrent-vowel task. Impaired listeners with high ΔF0 thresholds did not benefit from F0 differences between vowels. At the group level, normal-hearing listeners benefited more than hearing-impaired listeners from F0 differences between competing signals on both the concurrent-vowel and sentence tasks. However, for individual listeners, ΔF0 thresholds and improvements in concurrent-vowel labeling based on F0 differences were only weakly associated with F0-based improvements in performance on the sentence task. For both the concurrent-vowel and sentence tasks, there was evidence that the ability to benefit from F0 differences between competing signals decreases with age.
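The semitone separations used above map onto frequency ratios in the usual way: a difference of n semitones corresponds to a factor of 2^(n/12). A small helper (the function name is only illustrative):

```python
# n semitones above f0_hz corresponds to f0_hz * 2**(n/12).
def f0_shifted(f0_hz: float, semitones: float) -> float:
    return f0_hz * 2.0 ** (semitones / 12.0)

print(f0_shifted(100.0, 4))   # ~126.0 Hz: a 4-semitone separation from 100 Hz
print(f0_shifted(100.0, -4))  # ~79.4 Hz
```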

6.
HINT list equivalency was examined using 24 listeners between 60 and 70 years old who had sensorineural hearing impairment. A Greco-Latin square design was used to ensure that each list was presented an equal number of times per condition. Four conditions were tested: (1) speech in quiet, (2) speech in 65 dBA noise with noise at 0 degrees azimuth, (3) speech in 65 dBA noise with noise at 90 degrees azimuth, and (4) speech in 65 dBA noise with noise at 270 degrees azimuth. Speech materials were always presented at 0 degrees azimuth. Overall mean scores ranged from 29.9 dBA for the quiet condition to 63.4 dBA for the noise at 0 degrees azimuth condition. A significant difference was found between Lists 13 and 16 only. This was attributed to audibility differences among the listeners. Therefore, the 25 HINT lists should be considered equivalent for older populations with similar hearing impairment. The HINT lists can be used for relative measures, such as comparison of aided versus unaided sentence SRTs or comparison of 2 different hearing aids.

7.
The present study was a systematic investigation of the benefit of providing hearing-impaired listeners with audible high-frequency speech information. Five normal-hearing and nine high-frequency hearing-impaired listeners identified nonsense syllables that were low-pass filtered at a number of cutoff frequencies. As a means of quantifying audibility, the Articulation Index (AI) was calculated for each condition for each listener. Most hearing-impaired listeners demonstrated an improvement in speech recognition as additional audible high-frequency information was provided. In some cases for more severely impaired listeners, increasing the audibility of high-frequency speech information resulted in no further improvement in speech recognition, or even decreases in speech recognition. A new measure of how well hearing-impaired listeners used information within specific frequency bands, called "efficiency", was devised. This measure compared the benefit of providing a given increase in speech audibility to a hearing-impaired listener to the benefit observed in normal-hearing listeners for the same increase in speech audibility. Efficiencies were calculated using the old AI method and the new AI method (which takes into account the effects of high speech presentation levels). There was a clear pattern in the results suggesting that as the degree of hearing loss at a given frequency increased beyond 55 dB HL, the efficacy of providing additional audibility to that frequency region was diminished, especially when this degree of hearing loss was present at frequencies of 4000 Hz and above. A comparison of analyses from the "old" and "new" AI procedures suggests that some, but not all, of the deficiencies of speech recognition in these listeners were due to high presentation levels.
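The abstract does not give the exact formula for "efficiency", but the description (benefit to a hearing-impaired listener for a given audibility increase, relative to the benefit normal-hearing listeners show for the same increase) suggests a simple ratio. A hedged sketch of that reading, not the paper's published formula:

```python
# Assumed reading of "efficiency": the hearing-impaired listener's score gain
# divided by the normal-hearing gain for the same increment in Articulation Index.
def efficiency(hi_gain: float, nh_gain: float) -> float:
    if nh_gain == 0:
        raise ValueError("normal-hearing benefit must be nonzero")
    return hi_gain / nh_gain

# Example: the same +0.10 AI yields +6 points for an impaired listener and
# +12 points for normal-hearing listeners -> efficiency 0.5.
print(efficiency(6.0, 12.0))
```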

8.
A prevailing complaint among individuals with sensorineural hearing loss (SNHL) is difficulty understanding speech, particularly under adverse listening conditions. The present investigation compared the speech-recognition abilities of listeners with mild to moderate degrees of SNHL to those of normal-hearing individuals with simulated hearing impairments, accomplished using spectrally shaped masking noise. Speech-perception ability was assessed using the predictability-high sentences from the Speech Perception in Noise test. Results revealed significant differences between groups in sentential-recognition ability, with the hearing-impaired subjects performing more poorly than the masked-normal listeners. These findings suggest the presence of a secondary distortion degrading sentential-recognition ability in the hearing impaired; the implications of these data for the mechanism(s) responsible for speech perception in the hearing impaired are discussed.
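Spectrally shaped masking noise of the kind mentioned above can be generated by filtering white noise so its spectrum follows a chosen target. The sketch below is purely illustrative: the target gains are hypothetical and do not correspond to the maskers used in the study.

```python
# Shape white noise with an FIR filter whose response follows hypothetical
# target gains (more high-frequency energy would simulate a sloping loss).
import numpy as np
from scipy.signal import firwin2, lfilter

fs = 16000
rng = np.random.default_rng(0)
white = rng.standard_normal(5 * fs)  # 5 s of white noise

freqs = [0, 250, 500, 1000, 2000, 4000, 8000]   # Hz (0 .. fs/2)
gains = [0.1, 0.1, 0.2, 0.4, 0.8, 1.0, 1.0]     # linear amplitude targets

fir = firwin2(513, freqs, gains, fs=fs)
shaped_noise = lfilter(fir, [1.0], white)
```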

9.
Young normal-hearing listeners and young-elderly listeners between 55 and 65 years of age, ranging from near-normal hearing to moderate hearing loss, were compared using different speech recognition tasks (consonant recognition in quiet and in noise, and time-compressed sentences) and working memory tasks (serial word recall and digit ordering). The results showed that the group of young-elderly listeners performed worse on both the speech recognition and working memory tasks than the young listeners. However, when pure-tone audiometric thresholds were used as a covariate, the significant differences between groups disappeared. These results support the hypothesis that sensory decline in young-elderly listeners is an important factor in explaining the decrease in speech processing and working memory capacity observed at these ages. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
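The covariate analysis described above (a group comparison with pure-tone thresholds partialled out) can be sketched as an ANCOVA-style regression; the abstract does not specify the exact model, and the data and column names below are invented.

```python
# Illustrative ANCOVA-style analysis: test the group effect with and without
# pure-tone average (PTA) as a covariate (all values are made up).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":  [88, 84, 90, 79, 72, 75, 70, 68],
    "group":  ["young"] * 4 + ["young_elderly"] * 4,
    "pta_db": [5, 8, 6, 10, 22, 30, 28, 35],
})

group_only = smf.ols("score ~ C(group)", data=df).fit()
with_pta = smf.ols("score ~ C(group) + pta_db", data=df).fit()
print(group_only.summary())
print(with_pta.summary())
```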

10.
Although it is well known that the speech produced by the deaf is generally of low intelligibility, the sources of this low speech intelligibility have generally been ascribed either to aberrant articulation of phonemes or inappropriate prosody. This study was designed to determine to what extent a nonsegmental aspect of speech, formant transitions, may differ in the speech of the deaf and of the normal hearing. The initial second formant transitions of the vowels /i/ and /u/ after labial and alveolar consonants (/b, d, f/) were compared in the speech of six normal-hearing and six hearing-impaired adolescents. In the speech of the hearing-impaired subjects, the second formant transitions may be reduced both in time and in frequency. At its onset, the second formant may be nearer to its eventual target frequency than in the speech of the normal subjects. Since formant transitions are important acoustic cues for the adjacent consonants, reduced F2 transitions may be an important factor in the low intelligibility of the speech of the deaf.

11.
Three experiments were conducted to investigate the effects of variations in talker characteristics, speaking rate, and overall amplitude on perceptual identification in normal-hearing young (NHY), normal-hearing elderly (NHE), and hearing-impaired elderly (HIE) listeners. The three dimensions were selected because variations in voice characteristics and speaking rate affect features of speech signals that are important for word recognition while overall amplitude changes do not alter stimulus parameters that have direct effects on phonetic identification. Thus, the studies were designed to examine how variations in both phonetically relevant and irrelevant stimulus dimensions affect speech processing in a number of different populations. Age differences, as indicated by greater effects of variability for the NHE compared with the NHY listeners, were observed for mixed-talker and mixed-amplitude word lists. Effects of age-related hearing impairment, as indicated by reduced scores for the HIE compared with the NHE group, were observed for variations in speaking rate and talker characteristics. Considered together, the findings suggest that age-related changes in perceptual normalization and selective attention may contribute to the reduced speech understanding that is often reported for older adults.

12.
Speech remains intelligible despite the elimination of canonical acoustic correlates of phonemes from the spectrum. A portion of this perceptual flexibility can be attributed to modulation sensitivity in the auditory-to-phonetic projection, although signal-independent properties of lexical neighborhoods also affect intelligibility in utterances composed of words. Three tests were conducted to estimate the effects of exposure to natural and sine-wave samples of speech in this kind of perceptual versatility. First, sine-wave versions of the easy and hard word sets were created, modeled on the speech samples of a single talker. The performance difference in recognition of easy and hard words was used to index the perceptual reliance on signal-independent properties of lexical contrasts. Second, several kinds of exposure produced familiarity with an aspect of sine-wave speech: (a) sine-wave sentences modeled on the same talker; (b) sine-wave sentences modeled on a different talker, to create familiarity with a sine-wave carrier; and (c) natural sentences spoken by the same talker, to create familiarity with the idiolect expressed in the sine-wave words. Recognition performance with both easy and hard sine-wave words improved after exposure only to sine-wave sentences modeled on the same talker. Third, a control test showed that signal-independent uncertainty is a plausible cause of differences in recognition of easy and hard sine-wave words. The conditions of beneficial exposure reveal the specificity of attention underlying versatility in speech perception. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
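The core idea of sine-wave speech, replacing formant tracks with time-varying sinusoids, can be shown in a few lines. The formant contours below are invented, and this is not the stimulus-generation code used in the study.

```python
# Toy sine-wave "speech": a sum of sinusoids that follow invented formant tracks.
import numpy as np

fs = 16000
t_len = int(0.5 * fs)

f1 = np.linspace(700, 300, t_len)    # hypothetical falling F1 (Hz)
f2 = np.linspace(1200, 2200, t_len)  # hypothetical rising F2 (Hz)
f3 = np.full(t_len, 2600.0)          # hypothetical flat F3 (Hz)

def tone_from_track(freq_track):
    # Integrate instantaneous frequency to obtain phase, then take the sine.
    phase = 2 * np.pi * np.cumsum(freq_track) / fs
    return np.sin(phase)

sine_wave_speech = (tone_from_track(f1) + tone_from_track(f2) + tone_from_track(f3)) / 3
```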

13.
Noise exposure measurements were performed with pilots of the German Federal Navy during flight situations. Ambient noise levels during regular flight remained above 90 dB(A). This noise intensity requires wearing ear protection to avoid sound-induced hearing loss. To be able to understand radio communication (ATC) in spite of a noisy environment, headphone volume must be raised above the noise of the engines. The use of ear plugs in addition to the headsets and flight helmets is of only limited value because personal ear protection affects the intelligibility of ATC. Whereas the speech intelligibility of pilots with normal hearing is affected only to a minor degree, pilots with pre-existing high-frequency hearing losses show substantial impairments of speech intelligibility that vary in proportion to the hearing deficit present. Communication abilities can be reduced drastically, which in turn can affect air traffic security. The development of active noise compensation (ANC) devices that make use of the "anti-noise" principle may be a solution to this dilemma. To evaluate the effectiveness of an ANC system and its influence on speech intelligibility, speech audiometry was performed with a German standardized test under simulated flight conditions with helicopter pilots. The results demonstrate a helpful effect on speech understanding, especially for pilots with noise-induced hearing losses. This may help to avoid pre-retirement professional disability.
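The "anti-noise" principle behind ANC can be illustrated with a toy adaptive canceller. This is only a sketch of the general LMS approach, not the system evaluated above; a real headset ANC also has to account for the acoustic path to the ear, which is ignored here.

```python
# Toy LMS noise canceller: a reference microphone picks up the noise, an
# adaptive filter predicts the noise at the ear, and the prediction is
# subtracted so mainly the wanted signal (e.g., ATC speech) remains.
import numpy as np

def lms_anc(reference, primary, n_taps=32, mu=0.01):
    w = np.zeros(n_taps)
    residual = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]  # most recent reference samples first
        noise_estimate = w @ x
        e = primary[n] - noise_estimate    # residual after cancellation
        w += 2 * mu * e * x                # LMS weight update
        residual[n] = e
    return residual
```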

14.
Previous studies have shown that bizarre and common images produce equivalent levels of recall in unmixed-list designs. Using unmixed lists, we tested the view that bizarre images would be less susceptible than common images to common sources of interference. In all experiments, subjects imaged a list of either bizarre or common sentences and then performed some kind of interfering task before recalling the initial list of sentences. Experiment 1 showed that bizarre images were better accessed than common images after imaging an intervening list of common sentences. Also, components of common images tended to be better recalled than those of bizarre images after imaging an intervening list of bizarre sentences. Experiments 2a and 2b showed that interfering tasks consisting of studying lists of common concrete nouns did not differentially affect memory for bizarre and common images. In Experiment 3, labeling and imaging an interfering list of common pictures produced higher recall of bizarre images. Generally, bizarre images appeared to be less susceptible than common images to interference from certain types of common encodings. Importantly, the superior recall of bizarre images was always due to greater image (sentence) access, whereas higher recall of common images was associated with greater recovery of the image (sentence) constituents. Explanation of the precise pattern of results requires consideration of the distinctive properties of bizarre images. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Examined the development of an awareness (metamemory) of "constructive interference." This is the "fact" that when children and adults are presented a list of semantically related sentences, they later find it more difficult to distinguish old from new instances than when they are presented a list of unrelated sentences. Knowledge of this constructive interference was tested by having 192 11-, 15- and 22-yr-old students first predict recognition and then take an actual recognition test. In independent groups, half of the Ss received lists of semantically related sentences, and half received lists of semantically unrelated sentences. By comparing Ss' predictions with their actual performances across the different groups, it appears that the 11-yr-olds did not comprehend this phenomenon, but the 15- and 22-yr-olds did. That is, older Ss correctly predicted that recognition performance would be poorer for related lists than for unrelated lists. The 11-yr-olds, by contrast, predicted that recognition would be about the same for the 2 kinds of lists. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
17.
A solution to the following problem is presented: Obtain a principled approach to studying error patterns in sentence-length responses obtained from subjects who were instructed to simply report what a talker had said. The solution is a sequence comparator that performs phoneme-to-phoneme alignment on transcribed stimulus and response sentences. Data for developing and testing the sequence comparator were obtained from 139 normal-hearing subjects who lipread (speechread) 100 sentences and from 15 different subjects who identified nonsense syllables by lipreading. Development of the sequence comparator involved testing two different cost metrics (visemes versus Euclidean distances) and two related comparison algorithms. After alignments with face validity were achieved, a validation experiment was conducted for which measures from random versus true stimulus-response sentence pairs were compared. Measures of phonemes correct and substitution uncertainty were found to be sensitive to the nature of the sentence pairs. In particular, correct phoneme matches were extremely rare in random pairings in comparison with true pairs. Also, an information-theoretic measure of uncertainty for substitutions in true versus random pairings showed that uncertainty was always higher for random than for true pairs.
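The sequence comparator described above is essentially a dynamic-programming aligner with configurable costs. The sketch below is a generic edit-distance aligner with a pluggable substitution-cost function, not the authors' implementation; the default cost is a placeholder rather than either of the viseme or Euclidean-distance metrics they tested.

```python
# Generic phoneme-string alignment cost via dynamic programming; sub_cost can
# be replaced by a viseme-based or feature-distance metric.
def align_cost(stimulus, response,
               sub_cost=lambda a, b: 0.0 if a == b else 1.0, indel_cost=1.0):
    m, n = len(stimulus), len(response)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + indel_cost,                      # deletion
                          d[i][j - 1] + indel_cost,                      # insertion
                          d[i - 1][j - 1] + sub_cost(stimulus[i - 1],
                                                     response[j - 1]))   # substitution
    return d[m][n]

print(align_cost(["b", "ae", "t"], ["p", "ae", "t"]))  # 1.0 with the default cost
```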

18.
The judgment of annoyance of distorted speech differs radically between language groups. The results show that listeners who comprehend the spoken language base their annoyance judgments on the informational content they extract, whereas those who do not comprehend it base them on the perceptual characteristics of meaningless sound (particularly loudness). A series of distorted German speech sounds was presented to two subject groups consisting of native Swedish and English speakers, and the results were compared with earlier results from groups of native German and Polish subjects. The 50 stimuli were generated from the very same speech signal distorted in two principal ways, either with repeated silent gaps or with superimposed noise impulses. The perceived annoyance of the distorted speech was judged by category scaling for all subject groups and, as a control for "ceiling" effects, also by magnitude estimation for the Swedish and English subjects. There is a pronounced tendency for German subjects to judge the German speech distorted with silent gaps as more annoying than that distorted with superimposed noise impulses. In contrast, the Swedish, English, and Polish subjects judged the two German-speech distortions in the reverse order with regard to annoyance. Thus, for noncomprehending listeners, noise-distorted speech is more annoying, but for comprehending listeners it is speech distorted by gaps. This means that impaired communication intrusiveness, rather than loudness, predominates in the annoyance judgments of comprehending listeners.

19.
Recent investigations of time-altered speech have dealt with the effect of time compression and sensation level on intelligibility scores of native speaker/listeners of English. In the present investigation, the intelligibility of time-compressed consonant-nucleus-consonant monosyllables was studied using English speaker/listeners whose native languages are Spanish or Indo-Dravidian. Results supported earlier findings in that intelligibility decreased as a function of increasing percentage of time compression and decreasing sensation level. This effect was more prominent for the Indo-Dravidian than for Spanish speaker/listeners. The Spanish group of subjects showed generally lower difference scores than did the Indo-Dravidian group when compared to native English speaker/listeners.

20.
Microphone arrays can improve speech recognition in noise for hearing-impaired listeners by suppressing interference coming from other than the desired signal direction. In a previous paper [J. M. Kates and M. R. Weiss, J. Acoust. Soc. Am. 99, 3138-3148 (1996)], several array-processing techniques were evaluated in two rooms using the AI-weighted array gain as the performance metric. The array consisted of five omnidirectional microphones having uniform 2.5-cm spacing, oriented in the endfire direction. In this paper, the speech intelligibility for two of the array processing techniques, delay-and-sum beamforming and superdirective processing, is evaluated for a group of hearing-impaired subjects. Speech intelligibility was measured using the speech reception threshold (SRT) for spondees and the speech intelligibility rating (SIR) for sentence materials. The array performance is compared with that for a single omnidirectional microphone and a single directional microphone having a cardioid response pattern. The SRT and SIR results show that the superdirective array processing was the most effective, followed by the cardioid microphone, the array using delay-and-sum beamforming, and the single omnidirectional microphone. The relative processing ratings do not appear to be strongly affected by the size of the room, and the SRT values determined using isolated spondees are similar to the SIR values produced from continuous discourse.
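Delay-and-sum beamforming for the endfire geometry described above (five microphones, 2.5-cm spacing) amounts to time-aligning the microphone signals for sound arriving along the array axis and averaging them. The sketch below uses integer-sample delays for brevity; a practical implementation would use fractional-delay filtering, and this is not the processing from the cited paper.

```python
# Simplified endfire delay-and-sum beamformer (integer-sample delays only).
import numpy as np

def delay_and_sum_endfire(mic_signals, fs, spacing_m=0.025, c=343.0):
    """mic_signals: shape (n_mics, n_samples), ordered from front to back."""
    n_mics, n_samples = mic_signals.shape
    out = np.zeros(n_samples)
    for m in range(n_mics):
        # Endfire sound reaches microphone m an extra spacing_m / c seconds late
        # per step, so advance the later microphones before summing.
        delay = int(round(m * spacing_m / c * fs))
        out[:n_samples - delay] += mic_signals[m, delay:]
    return out / n_mics
```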
