Similar Articles
20 similar articles found.
1.
The present study was a systematic investigation of the benefit of providing hearing-impaired listeners with audible high-frequency speech information. Five normal-hearing and nine high-frequency hearing-impaired listeners identified nonsense syllables that were low-pass filtered at a number of cutoff frequencies. As a means of quantifying audibility, an Articulation Index (AI) was calculated for each condition for each listener. Most hearing-impaired listeners demonstrated an improvement in speech recognition as additional audible high-frequency information was provided. In some cases, for more severely impaired listeners, increasing the audibility of high-frequency speech information resulted in no further improvement in speech recognition, or even a decrease. A new measure, called "efficiency," was devised to quantify how well hearing-impaired listeners used information within specific frequency bands. This measure compared the benefit of a given increase in speech audibility for a hearing-impaired listener with the benefit observed in normal-hearing listeners for the same increase in audibility. Efficiencies were calculated using the old AI method and the new AI method (which takes into account the effects of high speech presentation levels). There was a clear pattern in the results suggesting that as the degree of hearing loss at a given frequency increased beyond 55 dB HL, the efficacy of providing additional audibility to that frequency region was diminished, especially when this degree of hearing loss was present at frequencies of 4000 Hz and above. A comparison of analyses from the "old" and "new" AI procedures suggests that some, but not all, of the deficiencies in speech recognition in these listeners were due to high presentation levels.
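
For orientation, the sketch below (Python, illustrative only) shows the basic idea behind an Articulation-Index-style audibility calculation: a band-importance-weighted sum of the audible fraction of the speech dynamic range in each band. The band levels, thresholds, and importance weights are hypothetical and do not reproduce the "old" or "new" AI procedures used in the study.

```python
import numpy as np

def articulation_index(speech_spectrum_db, threshold_db, importance, dyn_range_db=30.0):
    """Simplified Articulation-Index-style audibility sum.

    speech_spectrum_db : long-term rms speech level per band (dB SPL)
    threshold_db       : listener's threshold per band (dB SPL)
    importance         : band-importance weights, summing to 1
    Audibility in each band is the fraction of a 30-dB speech dynamic
    range (peaks roughly 15 dB above rms) that lies above threshold.
    """
    speech_peaks_db = np.asarray(speech_spectrum_db, dtype=float) + 15.0
    audible_db = speech_peaks_db - np.asarray(threshold_db, dtype=float)
    band_audibility = np.clip(audible_db / dyn_range_db, 0.0, 1.0)
    return float(np.sum(np.asarray(importance, dtype=float) * band_audibility))

# Illustrative (hypothetical) octave-band values, 250-8000 Hz
speech = [55, 52, 48, 44, 40, 36]            # dB SPL per band
thresh = [20, 20, 30, 50, 65, 70]            # sloping high-frequency loss
weight = [0.10, 0.15, 0.25, 0.25, 0.15, 0.10]
print(f"AI ~ {articulation_index(speech, thresh, weight):.2f}")
```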

2.
Auditory perception with hearing protectors was assessed in three groups of subjects, two with normal hearing, but differing in age, and one with moderate bilateral sensorineural hearing loss. Individuals were tested with the ears unoccluded, and fitted with each of two level-dependent ear muffs and their conventional level-independent counterparts. One of the former devices provided limited amplification. In each of these five ear conditions, the threshold of audibility for one-third octave noise bands centered at 500, 1,000, 2,000 and 4,000 Hz, consonant discrimination, and word recognition were measured in quiet and in a continuous impulse noise background. The results showed that the attenuation of sounds (i.e. the difference between protected and unoccluded thresholds) in quiet did not vary as a function of age or hearing loss for any of the four protectors. In noise, the difference between protected and unoccluded listening was close to zero, as long as hearing was normal. With hearing loss as a factor, there was a significant increment in the protected threshold, the amount determined by the device. Word recognition in quiet was adversely affected in normal-hearing listeners by the three attenuating devices but improved in noise relative to unoccluded listening. Amplification had a deleterious effect for both consonant discrimination and word recognition in noise. In hearing-impaired listeners, speech perception was impeded by all four muffs but less so in quiet with limited amplification.

3.
People with cochlear hearing loss have markedly higher speech-reception thresholds (SRTs) than normal for speech presented in background sounds with spectral and/or temporal dips. This article examines the extent to which SRTs can be improved by linear amplification with appropriate frequency-response shaping, and by fast-acting wide-dynamic-range compression amplification with one, two, four, or eight channels. Eighteen elderly subjects with moderate to severe hearing loss were tested. SRTs for sentences were measured for four background sounds, presented at a nominal level (prior to amplification) of 65 dB SPL: (1) A single female talker, digitally filtered so that the long-term average spectrum matched that of the target speech; (2) a noise with the same average spectrum as the target speech, but with the temporal envelope of the single talker; (3) a noise with the same overall spectral shape as the target speech, but filtered so as to have four equivalent-rectangular-bandwidth (ERB) wide spectral notches at several frequencies; (4) a noise with both spectral and temporal dips obtained by applying the temporal envelope of a single talker to speech-shaped noise [as in (2)] and then filtering that noise [as in (3)]. Mean SRTs were 5-6 dB lower (better) in all of the conditions with amplification than for unaided listening. SRTs were significantly lower for the systems with one-, four-, and eight-channel compression than for linear amplification, although the benefit, averaged across subjects, was typically only 0.5 to 0.9 dB. The lowest mean SRT (-9.9 dB, expressed as a speech-to-background ratio) was obtained for noise (4) and the system with eight-channel compression. This is about 6 dB worse than for elderly subjects with near-normal hearing, when tested without amplification. It is concluded that amplification, and especially fast-acting compression amplification, can improve the ability to understand speech in background sounds with spectral and temporal dips, but it does not restore performance to normal.
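
Background sound (2) above is built by imposing a single talker's temporal envelope on speech-shaped noise. A minimal sketch of one common way to do this (Hilbert envelope, low-pass smoothing, modulation of the noise) is shown below; the cutoff frequency, normalization, and the stand-in signals are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def impose_talker_envelope(talker, noise, fs, cutoff_hz=32.0):
    """Give `noise` the temporal envelope of `talker`, one way to build
    a masker with temporal dips like background noise (2)."""
    env = np.abs(hilbert(talker))                         # instantaneous envelope
    sos = butter(4, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    env = np.maximum(sosfiltfilt(sos, env), 0.0)          # smoothed, non-negative envelope
    modulated = noise * env
    # rescale so the modulated masker keeps the original noise rms
    return modulated * np.sqrt(np.mean(noise ** 2) / (np.mean(modulated ** 2) + 1e-12))

fs = 16000
t = np.arange(fs) / fs
talker = np.sin(2 * np.pi * 150 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))  # stand-in "talker"
noise = np.random.default_rng(0).standard_normal(fs)      # stand-in speech-shaped noise
masker = impose_talker_envelope(talker, noise, fs)
```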

4.
This study investigated the hypothesis that age effects exert an increased influence on speech recognition performance as the number of acoustic degradations of the speech signal increases. Four groups participated: young listeners with normal hearing, elderly listeners with normal hearing, young listeners with hearing loss, and elderly listeners with hearing loss. Recognition was assessed for sentence materials degraded by noise, reverberation, or time compression, either in isolation or in binary combinations. Performance scores were converted to an equivalent signal-to-noise ratio index to facilitate direct comparison of the effects of different forms of stimulus degradation. Age effects were observed primarily in multiple degradation conditions featuring time compression of the stimuli. These results are discussed in terms of a postulated change in functional signal-to-noise ratio with increasing age.

5.
The first experiment investigated the effects of mild to moderate sensorineural hearing impairment on temporal analysis for noise stimuli of varying bandwidth. Tasks of temporal gap detection, amplitude modulation (AM) detection, and AM discrimination were examined. Relatively high levels of stimulation were used in order to reduce the possibility that the results of the listeners with hearing impairment would be influenced strongly by audibility. In general, there was considerable interlistener variation among the listeners with hearing impairment, with most showing normal performance and some showing degraded performance, regardless of the bandwidth of the stimulus carrying the temporal information. A second experiment investigated the hypothesis that listeners with sensorineural hearing impairment might have poor gap detection due to loudness recruitment. Here, gap markers were presented at levels where loudness growth was steeper for the listeners with hearing impairment than for the listeners with normal hearing. Although gap detection was sometimes poorer in listeners with hearing impairment than in listeners with normal hearing, there was no clear relation between gap detection performance and loudness recruitment in listeners with mild to moderate sensorineural hearing impairment.
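
The gap-detection and AM-detection tasks use noise stimuli containing either a brief silent gap or sinusoidal amplitude modulation. A minimal sketch of how such stimuli can be generated follows; the durations, gap lengths, and modulation parameters are illustrative, not those of the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 44100

def gap_noise(duration_s, gap_ms, fs):
    """Broadband noise with a silent gap centered in the burst (gap-detection stimulus)."""
    x = rng.standard_normal(int(duration_s * fs))
    gap_len = int(gap_ms * 1e-3 * fs)
    start = (len(x) - gap_len) // 2
    x[start:start + gap_len] = 0.0
    return x

def am_noise(duration_s, mod_rate_hz, mod_depth, fs):
    """Sinusoidally amplitude-modulated noise (AM-detection stimulus)."""
    n = int(duration_s * fs)
    carrier = rng.standard_normal(n)
    t = np.arange(n) / fs
    return carrier * (1.0 + mod_depth * np.sin(2 * np.pi * mod_rate_hz * t))

gap_stim = gap_noise(0.4, gap_ms=5, fs=fs)                 # hypothetical 5-ms gap
am_stim = am_noise(0.4, mod_rate_hz=8, mod_depth=0.5, fs=fs)
```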

6.
Sinusoidal modeling is a new procedure for representing the speech signal. In this approach, the signal is divided into overlapping segments, the Fourier transform is computed for each segment, and a set of desired spectral peaks is identified. The speech is then resynthesized using sinusoids that have the frequency, amplitude, and phase of the selected peaks, with the remaining spectral information being discarded. Using a limited number of sinusoids to reproduce speech in a background of multi-talker speech babble results in a speech signal that has an improved signal-to-noise ratio and enhanced spectral contrast. The more intense spectral components, assumed to be primarily the desired speech, are reproduced, whereas the less intense components, assumed to be primarily background noise, are not. To test the effectiveness of this processing approach as a noise suppression technique, both consonant recognition and perceived speech intelligibility were determined in quiet and in noise for a group of subjects with normal hearing as the number of sinusoids used to represent isolated speech tokens was varied. The results show that reducing the number of sinusoids used to represent the speech causes reduced consonant recognition and perceived intelligibility both in quiet and in noise, and suggest that similar results would be expected for listeners with hearing impairments.
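
A minimal sketch of the analysis/resynthesis loop described above follows: window overlapping segments, take the Fourier transform, keep only the largest spectral peaks, and rebuild each segment from those sinusoids. It omits peak tracking and phase interpolation across frames, so it is a simplified stand-in for the processing used in the study, with illustrative frame sizes.

```python
import numpy as np

def sinusoidal_model(x, fs, n_sines=8, frame_len=512, hop=256):
    """Crude sinusoidal analysis/resynthesis: per frame, keep the n_sines
    largest FFT peaks and rebuild the frame from those sinusoids only."""
    window = np.hanning(frame_len)
    y = np.zeros(len(x))
    norm = np.zeros(len(x))
    for start in range(0, len(x) - frame_len, hop):
        frame = x[start:start + frame_len] * window
        spec = np.fft.rfft(frame)
        mags = np.abs(spec)
        keep = np.argsort(mags)[-n_sines:]                # indices of the largest peaks
        t = np.arange(frame_len) / fs
        synth = np.zeros(frame_len)
        for k in keep:
            freq = k * fs / frame_len
            amp = 2.0 * mags[k] / np.sum(window)
            phase = np.angle(spec[k])
            synth += amp * np.cos(2 * np.pi * freq * t + phase)
        y[start:start + frame_len] += synth * window      # overlap-add resynthesis
        norm[start:start + frame_len] += window ** 2
    return y / np.maximum(norm, 1e-12)

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.default_rng(2).standard_normal(fs)
y = sinusoidal_model(x, fs, n_sines=4)                     # fewer sinusoids = coarser model
```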

7.
Current multichannel cochlear implant devices provide high levels of speech performance in quiet. However, performance deteriorates rapidly with increasing levels of background noise. The goal of this study was to investigate whether the noise susceptibility of cochlear implant users is primarily due to the loss of fine spectral information. Recognition of vowels and consonants was measured as a function of signal-to-noise ratio in four normal-hearing listeners in conditions simulating cochlear implants with both CIS and SPEAK-like strategies. Six conditions were evaluated: 3-, 4-, 8-, and 16-band processors (CIS-like), a 6/20 band processor (SPEAK-like), and unprocessed speech. Recognition scores for vowels and consonants decreased as the S/N level worsened in all conditions, as expected. Phoneme recognition threshold (PRT) was defined as the S/N at which the recognition score fell to 50% of its level in quiet. The unprocessed speech had the best PRT, which worsened as the number of bands decreased. Recognition of vowels and consonants was further measured in three Nucleus-22 cochlear implant users using either their normal SPEAK speech processor or a custom processor with a four-channel CIS strategy. With the CIS strategy, the best cochlear implant user showed performance in quiet and in noise similar to that of normal-hearing listeners presented with correspondingly spectrally degraded speech. These findings suggest that the noise susceptibility of cochlear implant users is at least partly due to the loss of spectral resolution. Efforts to improve the effective number of spectral information channels should improve implant performance in noise.
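
The normal-hearing simulations can be approximated with a noise-band vocoder: split the speech into a few bands, extract each band's envelope, and use it to modulate band-limited noise. The sketch below assumes log-spaced band edges and fourth-order filters, which are illustrative choices rather than the processors used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(x, fs, n_bands=4, lo=300.0, hi=5500.0):
    """Noise-band vocoder often used to simulate implant processing: bandpass
    the speech, take each band's envelope, and reimpose it on bandpassed noise."""
    edges = np.geomspace(lo, hi, n_bands + 1)              # log-spaced band edges
    noise = np.random.default_rng(3).standard_normal(len(x))
    out = np.zeros(len(x))
    for b in range(n_bands):
        sos = butter(4, [edges[b], edges[b + 1]], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                        # band envelope
        carrier = sosfiltfilt(sos, noise)                  # noise carrier in the same band
        out += env * carrier / (np.sqrt(np.mean(carrier ** 2)) + 1e-12)
    return out

fs = 16000
x = np.random.default_rng(4).standard_normal(fs)           # stand-in for a speech token
degraded = noise_vocoder(x, fs, n_bands=4)                 # 4-band condition, for example
```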

8.
Research has shown that speaking in a deliberately clear manner can improve the accuracy of auditory speech recognition. Allowing listeners access to visual speech cues also enhances speech understanding. Whether the nature of information provided by speaking clearly and by using visual speech cues is redundant has not been determined. This study examined how speaking mode (clear vs. conversational) and presentation mode (auditory vs. auditory-visual) influenced the perception of words within nonsense sentences. In Experiment 1, 30 young listeners with normal hearing responded to videotaped stimuli presented audiovisually in the presence of background noise at one of three signal-to-noise ratios. In Experiment 2, 9 participants returned for an additional assessment using auditory-only presentation. Results of these experiments showed significant effects of speaking mode (clear speech was easier to understand than was conversational speech) and presentation mode (auditory-visual presentation led to better performance than did auditory-only presentation). The benefit of clear speech was greater for words occurring in the middle of sentences than for words at either the beginning or end of sentences for both auditory-only and auditory-visual presentation, whereas the greatest benefit from supplying visual cues was for words at the end of sentences spoken both clearly and conversationally. The total benefit from speaking clearly and supplying visual cues was equal to the sum of each of these effects. Overall, the results suggest that speaking clearly and providing visual speech information provide complementary (rather than redundant) information.

9.
HINT list equivalency was examined using 24 listeners between 60 and 70 years old who had sensorineural hearing impairment. A Greco-Latin square design was used to ensure that each list was presented an equal number of times per condition. Four conditions were tested: (1) speech in quiet, (2) speech in 65 dBA noise with noise at 0 degrees azimuth, (3) speech in 65 dBA noise with noise at 90 degrees azimuth, and (4) speech in 65 dBA noise with noise at 270 degrees azimuth. Speech materials were always presented at 0 degrees azimuth. Overall mean scores ranged from 29.9 dBA for the quiet condition to 63.4 dBA for the noise at 0 degrees azimuth condition. A significant difference was found between Lists 13 and 16 only. This was attributed to audibility differences among the listeners. Therefore, the 25 HINT lists should be considered equivalent for older populations with similar hearing impairment. The HINT lists can be used for relative measures, such as comparison of aided versus unaided sentence SRTs or comparison of 2 different hearing aids.

10.
Speech recognition was measured in listeners with normal hearing and in listeners with sensorineural hearing loss under conditions that simulated hearing aid processing in a low-pass and speech-shaped background noise. Differing amounts of low-frequency gain reduction were applied during a high-frequency monosyllable test and a sentence level test to simulate the frequency responses of some commercial hearing aids. The results showed an improvement in speech recognition with low-frequency gain reduction in the low-pass noise, but not in the speech-shaped background noise. Masking patterns also were obtained with the two background noises at 70 and 80 dB SPL to compare with the speech results. There was no correlation observed between the masking results and the improvement in speech recognition with low-frequency gain reduction.

11.
The benefits of active noise reduction (ANR) hearing protectors were assessed in two groups of normal-hearing subjects, under and over the age of 40 years, and one group with bilateral high-tone hearing loss. Subjects were tested with the ears unoccluded and fitted with conventional sound attenuating E-A-R foam plugs, E-A-R HI-FI plugs, and Bilsom Viking muffs; and one ANR muff, the Peltor 7004. Within each ear condition, measurements were made in quiet of hearing thresholds for frequencies between 0.25 kHz and 8 kHz, duration and frequency difference limens, and word recognition. Hearing thresholds and word recognition were also measured in a background of impulsive cable swager noise. The E-A-R foam plug provided the highest and the E-A-R HI-FI plug, the lowest attenuation. The Bilsom Viking and Peltor muffs were virtually identical and midway between. An additional 10 dB of sound reduction was realized at 0.25 kHz with ANR. The masking effect of the noise on hearing threshold decreased with an increase in attenuation. None of the devices compromised either duration or frequency discrimination. Word recognition in noise improved in normal listeners when protectors were worn. For the impaired subjects, word recognition with poor contextual cues decreased with an increase in sound attenuation, in both quiet and noise. Like older normal listeners, their scores were relatively higher with ANR.

12.
Young normal-hearing listeners and young-elderly listeners between 55 and 65 years of age, ranging from near-normal hearing to moderate hearing loss, were compared using different speech recognition tasks (consonant recognition in quiet and in noise, and time-compressed sentences) and working memory tasks (serial word recall and digit ordering). The results showed that the group of young-elderly listeners performed worse on both the speech recognition and working memory tasks than the young listeners. However, when pure-tone audiometric thresholds were used as a covariate, the significant differences between groups disappeared. These results support the hypothesis that sensory decline in young-elderly listeners is an important factor in explaining the decrease in speech processing and working memory capacity observed at these ages.

13.
Several studies have recently demonstrated that normal-hearing listeners are sensitive to short-term temporal asymmetry in the envelopes of sinusoidal or noise carriers. This paper presents a study in which cochlear implantees were presented with trains of current pulses with temporally asymmetric envelopes through one channel of an implant that stimulates the auditory nerve directly, thereby bypassing cochlear processes. When the level of the stimuli was adjusted to fit their audibility range, the implantees were able to discriminate temporal asymmetry over a much wider range than normal-hearing listeners. The results suggest that the perception of temporal asymmetry is limited by compression in the normal cochlea.
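
Temporally asymmetric envelopes of the kind studied here can be illustrated with a "damped" envelope (abrupt onset, exponential decay) and its time-reversed "ramped" counterpart. The sketch below applies such envelopes to an acoustic tone as a stand-in for the electrical pulse trains delivered through the implant; the repetition period, decay constant, and carrier frequency are hypothetical.

```python
import numpy as np

def asymmetric_envelopes(period_ms, half_life_ms, fs, n_repeats=10):
    """Periodic damped envelope (fast attack, exponential decay) and its time
    reversal ("ramped"): the two differ only in short-term temporal asymmetry."""
    n = int(period_ms * 1e-3 * fs)
    t = np.arange(n) / fs
    damped_cycle = np.exp(-t / (half_life_ms * 1e-3 / np.log(2)))
    damped = np.tile(damped_cycle, n_repeats)
    ramped = np.tile(damped_cycle[::-1], n_repeats)
    return damped, ramped

fs = 44100
damped, ramped = asymmetric_envelopes(period_ms=50, half_life_ms=4, fs=fs)
t = np.arange(len(damped)) / fs
carrier = np.sin(2 * np.pi * 1000 * t)                     # acoustic stand-in carrier
damped_tone, ramped_tone = carrier * damped, carrier * ramped
```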

14.
Earlier studies have indicated mid-frequency auditory dysfunction and depressed ability to discriminate speech in noise among noise-exposed listeners with high-frequency hearing loss. The present study was designed to determine whether mid-frequency dysfunction contributed to the depressed speech discrimination performance. Normal listeners, and noise-exposed and older listeners with high-frequency hearing loss listened to word lists presented in competing 'cocktail party' noise under unfiltered and low-pass filter conditions. In the low-pass filter condition, the performance of the noise-exposed listeners was superior to that of the normal listeners, indicating that mid-frequency auditory dysfunction on the part of noise-exposed listeners does not contribute to their difficulties discriminating unfiltered speech in noise. The performance of the older listeners was below that of the two other groups in both filtered and unfiltered conditions, indicating greater difficulty discriminating speech than would be predicted only on the basis of high-frequency hearing loss.

15.
Adaptive linear filtering can improve effective speech-to-noise ratios by attenuating spectral regions with intense noise components to reduce the noise's spread of masking onto speech in neighboring regions. This mechanism was examined in static listening conditions for seven individuals with sensorineural hearing loss. Subjects were presented with nonsense syllables in an intense octave-band noise centered on 0.5, 1, or 2 kHz. The nonsense syllables were amplified to maximize the articulation index; the noises were the same for all subjects. The processing consisted of applying frequency-selective attenuation to the speech-plus-noise with the goal of attenuating the frequency region containing the noise by various amounts. Consonant recognition scores and noise masking patterns were collected in all listening conditions. When compared with masking patterns obtained from normal-hearing subjects, all hearing-impaired subjects had higher masked thresholds at frequencies below, within, and above the masker band except for one subject who demonstrated additional masking above the masker only. Frequency-selective attenuation resulted in both increases and decreases in consonant recognition scores. Increases were associated with a release from upward spread of masking. Decreases were associated with applying too much attenuation such that speech energy within the masker band that was audible before processing was partially below threshold after processing. Fletcher's [Speech and Hearing in Communication (Van Nostrand, New York, 1953)] version of articulation theory (without modification) accounted for individual subject differences within the range of variability associated with the consonant recognition test in almost every instance. Hence, primary factors influencing speech reception benefits are characterized by articulation theory. Fletcher's theory appears well-suited to guide the design of control algorithms that will maximize speech recognition for individual listeners.
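
The processing applies frequency-selective attenuation to the octave band containing the masker. A minimal sketch of one way to do this (bandpass the in-band content and subtract a scaled copy) is shown below; the filter order and attenuation value are illustrative, and the test signal is a toy stand-in for the speech-plus-noise mixture.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def attenuate_band(x, fs, center_hz, attenuation_db):
    """Apply frequency-selective attenuation to the one-octave band around
    center_hz, leaving the rest of the spectrum (approximately) unchanged."""
    lo, hi = center_hz / np.sqrt(2), center_hz * np.sqrt(2)   # octave-band edges
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    in_band = sosfiltfilt(sos, x)                          # zero-phase, so subtraction is coherent
    gain = 10.0 ** (-attenuation_db / 20.0)
    return x - (1.0 - gain) * in_band                      # shrink only the in-band content

fs = 16000
t = np.arange(fs) / fs
speech_plus_noise = (np.sin(2 * np.pi * 300 * t)           # stand-in "speech" component
                     + np.sin(2 * np.pi * 1000 * t))       # stand-in masker at 1 kHz
processed = attenuate_band(speech_plus_noise, fs, center_hz=1000, attenuation_db=20)
```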

16.
For many people with profound hearing loss, conventional hearing aids provide only limited support for speechreading. This study aims at optimizing the presentation of speech signals in the severely reduced dynamic range of the profoundly hearing impaired by means of multichannel compression and multichannel amplification. The speech signal in each of six 1-octave channels (125-4000 Hz) was compressed instantaneously, using compression ratios of 1, 2, 3, or 5, and a compression threshold of 35 dB below peak level. A total of eight conditions were composed in which the compression ratio varied per channel. Sentences were presented audio-visually to 16 profoundly hearing-impaired subjects and syllable intelligibility was measured. Results show that all auditory signals are valuable supplements to speechreading. No clear overall preference is found for any of the compression conditions, but relatively high compression ratios (> 3-5) have a significantly detrimental effect. Inspection of the individual results reveals that compression may be beneficial for one subject.
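
A minimal sketch of the per-channel instantaneous compression described above follows: each octave band is compressed above a kneepoint 35 dB below its peak level, with a channel-specific compression ratio. The filter design, envelope extraction, and the example ratios are assumptions for illustration, not the processing used with the subjects.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def compress_channel(x, ratio, peak_db, threshold_below_peak_db=35.0):
    """Instantaneous compression of one channel: above the kneepoint
    (35 dB below peak level), level changes are divided by `ratio`."""
    knee_db = peak_db - threshold_below_peak_db
    env = np.abs(hilbert(x)) + 1e-12                       # instantaneous envelope
    level_db = 20.0 * np.log10(env)
    over = np.maximum(level_db - knee_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)                  # compress only above the knee
    return x * 10.0 ** (gain_db / 20.0)

def multichannel_compress(x, fs, ratios):
    """Six 1-octave channels centered at 125-4000 Hz, each with its own ratio."""
    centers = 125.0 * 2.0 ** np.arange(len(ratios))
    out = np.zeros(len(x))
    for center, ratio in zip(centers, ratios):
        sos = butter(2, [center / np.sqrt(2), center * np.sqrt(2)],
                     btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        peak_db = 20.0 * np.log10(np.max(np.abs(band)) + 1e-12)
        out += compress_channel(band, ratio, peak_db)
    return out

fs = 16000
x = np.random.default_rng(5).standard_normal(fs)            # stand-in speech signal
y = multichannel_compress(x, fs, ratios=[1, 2, 3, 5, 3, 2])  # hypothetical per-channel ratios
```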

17.
OBJECTIVE: The aim of this study was to determine a relationship among selected listening conditions and amplification schemes that could be provided in a multiple memory hearing aid. DESIGN: The study consisted of three laboratory tests: 1) A screening test to select hearing-impaired subjects who appeared to benefit from multiple amplification schemes. 2) A category scaling test to rank 16 amplification schemes in 15 listening conditions. The 16 schemes were simulated with a digital master hearing aid and comprised 5 linear systems and 11 compression characteristics. The 15 listening conditions comprised 6 listening environments combined with 2 or 3 response criteria. 3) A paired comparison test in which the two highest ranked amplification schemes were evaluated together with a reference linear frequency response (NAL) in a round-robin test. RESULTS: The screening test demonstrated that 21 of 25 hearing-impaired people with mild or moderate, flat or gently sloping hearing losses appeared to benefit from multiple amplification schemes. Neither age nor audiometric factors discriminated between those who selected different schemes and those who did not. In general, the NAL-response was preferred or was as good as any other for listening to speech in quiet, speech in reverberation, speech in babble-noise, and for naturalness of all listening environments. The subjects consistently selected an amplification scheme other than the NAL-response for four specific listening conditions. The findings suggest that substantial high-frequency compression is preferred for the ease of understanding multiple talkers, whose voices differ in overall level, in quiet environments. The annoyance of low-frequency background noise can be reduced by low-frequency compression, whereas a frequency response steeper than the NAL-response makes it easier to understand speech in low frequency background noise. Finally, a frequency response flatter than the NAL-response can be used to make a high-frequency background noise sound less annoying. CONCLUSION: Hearing aid users with mild or moderate, flat or gently sloping hearing losses, fitted with equal and sufficient variation in amplification, prefer different amplification schemes depending on the number of talkers, the background noise and the response criterion.

18.
A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation.

19.
The consonant perception of 15 subjects with mild to moderate sensorineural hearing loss was evaluated using linear amplification and two different types of compression amplification. A specially modified hearing aid was used which allowed for variation of the amplifier input/output function in three steps, such that the compression ratio could be set to 1 (linear), 1.3 or 1.8. The Nonsense Syllable Test (NST) was recorded through the aid in quiet and in two different noise conditions (four-talker babble and a background noise with sharp intermittent sounds), and replayed to the listeners through headphones. No differences in consonant perception were found between the different types of amplification in the quiet condition. In the babble condition, consonant perception was significantly better with linear amplification than with either form of compression. In the sharp noise condition, there was no difference in performance between linear amplification and compression amplification with the ratio of 1.8. Consonant perception was adversely affected, however, by compression amplification with the ratio of 1.3 in this condition. Overall NST results and results for particular classes of consonants are discussed.

20.
This investigation examined the abilities of younger and older listeners to discriminate and identify temporal order of sounds presented in tonal sequences. It was hypothesized that older listeners would exhibit greater difficulty than younger listeners on both temporal processing tasks, particularly for complex stimulus patterns. It was also anticipated that tone order discrimination would be easier than tone order identification for all listeners. Listeners were younger and older adults with either normal hearing or mild-to-moderate sensorineural hearing losses. Stimuli were temporally contiguous three-tone sequences within a 1/3 octave frequency range centered at 4000 Hz. For the discrimination task, listeners discerned differences between standard and comparison stimulus sequences that varied in tonal temporal order. For the identification task, listeners identified tone order of a single sequence using labels of relative pitch. Older listeners performed more poorly than younger listeners on the discrimination task for the more complex pitch patterns and on the identification task for faster stimulus presentation rates. The results also showed that order discrimination is easier than order identification for all listeners. The effects of hearing loss on the ordering tasks were minimal.
