Similar Documents
A total of 20 similar documents were retrieved.
1.
Auditory perception with hearing protectors was assessed in three groups of subjects, two with normal hearing, but differing in age, and one with moderate bilateral sensorineural hearing loss. Individuals were tested with the ears unoccluded, and fitted with each of two level-dependent ear muffs and their conventional level-independent counterparts. One of the former devices provided limited amplification. In each of these five ear conditions, the threshold of audibility for one-third octave noise bands centered at 500, 1,000, 2,000 and 4,000 Hz, consonant discrimination, and word recognition were measured in quiet and in a continuous impulse noise background. The results showed that the attenuation of sounds (i.e. the difference between protected and unoccluded thresholds) in quiet did not vary as a function of age or hearing loss for any of the four protectors. In noise, the difference between protected and unoccluded listening was close to zero, as long as hearing was normal. With hearing loss as a factor, there was a significant increment in the protected threshold, the amount determined by the device. Word recognition in quiet was adversely affected in normal-hearing listeners by the three attenuating devices but improved in noise relative to unoccluded listening. Amplification had a deleterious effect for both consonant discrimination and word recognition in noise. In hearing-impaired listeners, speech perception was impeded by all four muffs but less so in quiet with limited amplification.
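For readers who want the quantity this abstract calls "attenuation" made concrete: it is simply the protected threshold minus the unoccluded threshold at each one-third-octave band. The sketch below is a minimal Python illustration with made-up threshold values; it is not the study's data or analysis code.

```python
# Illustrative sketch (hypothetical numbers, not the study's data):
# attenuation of a hearing protector is the protected threshold minus
# the unoccluded threshold, computed per one-third-octave noise band.

UNOCCLUDED = {500: 10.0, 1000: 8.0, 2000: 12.0, 4000: 15.0}   # dB HL, hypothetical
PROTECTED  = {500: 32.0, 1000: 30.0, 2000: 38.0, 4000: 42.0}  # dB HL, hypothetical

def attenuation(protected, unoccluded):
    """Return per-band attenuation in dB (protected minus unoccluded threshold)."""
    return {f: protected[f] - unoccluded[f] for f in sorted(unoccluded)}

if __name__ == "__main__":
    for freq, att in attenuation(PROTECTED, UNOCCLUDED).items():
        print(f"{freq:>5} Hz: {att:5.1f} dB attenuation")
```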

2.
The prevalence of MHL in 202 children aged 1 to 6 years with communication disorders who visited our clinic in 1991 was investigated. 1) 31% of the subjects had MHL bilaterally. The prevalence of MHL was 44% at age 1 year, 20% at age 2 years, 36% at age 3 years, 24% at age 4 years, 39% at age 5 years, and 33% at age 6 years. 2) 88% of children with MHL had OME, 10% had mild sensorineural hearing loss, and 2% had a ceruminous plug. 3) The prevalence of MHL in children with mental retardation and autistic disorders was 9%, that in children with stuttering was 9%, and that of OME accompanied by moderate and severe hearing disorders was 6%. 4) The primary causes in the 191 children, excluding those with stuttering, were distributed as follows: MHL in 30%, mental retardation and autistic disorders in 24%, and articulation disorders in 28%. 5) On the other hand, the prevalence of MHL in children with retarded language development and articulation disorders was 30%, which was significantly higher than in the other communication disorders. Accordingly, the results of this study suggest that MHL in early childhood greatly influences communication disorders.

3.
Current models of reading and speech perception differ widely in their assumptions regarding the interaction of orthographic and phonological information during language perception. The present experiments examined this interaction through a 2-alternative, forced-choice paradigm, and explored the nature of the connections between graphemic and phonemic processing subsystems. Exps 1 and 2 demonstrated a facilitation-dominant influence (i.e., benefits exceed costs) of graphemic contexts on phoneme discrimination, which is interpreted as a sensitivity effect. Exps 3 and 4 demonstrated a symmetrical influence (i.e., benefits equal costs) of phonemic contexts on grapheme discrimination, which can be interpreted as either a bias effect, or an equally facilitative/inhibitory sensitivity effect. General implications for the functional architecture of language processing models are discussed, as well as specific implications for models of visual word recognition and speech perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Normal-hearing and hearing-impaired listeners were tested to determine F0 difference limens for synthetic tokens of 5 steady-state vowels. The same stimuli were then used in a concurrent-vowel labeling task with the F0 difference between concurrent vowels ranging between 0 and 4 semitones. Finally, speech recognition was tested for synthetic sentences in the presence of a competing synthetic voice with the same, a higher, or a lower F0. Normal-hearing listeners and hearing-impaired listeners with small F0-discrimination (deltaF0) thresholds showed improvements in vowel labeling when there were differences in F0 between vowels on the concurrent-vowel task. Impaired listeners with high deltaF0 thresholds did not benefit from F0 differences between vowels. At the group level, normal-hearing listeners benefited more than hearing-impaired listeners from F0 differences between competing signals on both the concurrent-vowel and sentence tasks. However, for individual listeners, deltaF0 thresholds and improvements in concurrent-vowel labeling based on F0 differences were only weakly associated with F0-based improvements in performance on the sentence task. For both the concurrent-vowel and sentence tasks, there was evidence that the ability to benefit from F0 differences between competing signals decreases with age.
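The F0 separations in this task are specified in semitones; a separation of n semitones corresponds to a frequency ratio of 2^(n/12), so the separation between two fundamentals is 12·log2(F0b/F0a). The minimal Python sketch below (hypothetical F0 values, not the study's stimuli) shows how competing-voice F0s for 0 to 4 semitone separations could be derived from a base F0.

```python
import math

def semitone_difference(f0_a: float, f0_b: float) -> float:
    """Signed separation between two fundamental frequencies in semitones."""
    return 12.0 * math.log2(f0_b / f0_a)

def shift_by_semitones(f0: float, semitones: float) -> float:
    """F0 obtained by shifting `f0` up (or down) by a number of semitones."""
    return f0 * 2.0 ** (semitones / 12.0)

if __name__ == "__main__":
    base = 100.0  # Hz, hypothetical voice F0
    for st in (0, 1, 2, 4):  # separations like those used on the concurrent-vowel task
        shifted = shift_by_semitones(base, st)
        print(f"{st} semitone(s): competing F0 = {shifted:6.2f} Hz "
              f"(check: {semitone_difference(base, shifted):.2f} st)")
```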

5.
Traditional word-recognition tests typically use phonetically balanced (PB) word lists produced by one talker at one speaking rate. Intelligibility measures based on these tests may not adequately evaluate the perceptual processes used to perceive speech under more natural listening conditions involving many sources of stimulus variability. The purpose of this study was to examine the influence of stimulus variability and lexical difficulty on the speech-perception abilities of 17 adults with mild-to-moderate hearing loss. The effects of stimulus variability were studied by comparing word-identification performance in single-talker versus multiple-talker conditions and at different speaking rates. Lexical difficulty was assessed by comparing recognition of "easy" words (i.e., words that occur frequently and have few phonemically similar neighbors) with "hard" words (i.e., words that occur infrequently and have many similar neighbors). Subjects also completed a 20-item questionnaire to rate their speech understanding abilities in daily listening situations. Both sources of stimulus variability produced significant effects on speech intelligibility. Identification scores were poorer in the multiple-talker condition than in the single-talker condition, and word-recognition performance decreased as speaking rate increased. Lexical effects on speech intelligibility were also observed. Word-recognition performance was significantly higher for lexically easy words than lexically hard words. Finally, word-recognition performance was correlated with scores on the self-report questionnaire rating speech understanding under natural listening conditions. The pattern of results suggests that perceptually robust speech-discrimination tests are able to assess several underlying aspects of speech perception in the laboratory and clinic that appear to generalize to conditions encountered in natural listening situations where the listener is faced with many different sources of stimulus variability. That is, word-recognition performance measured under conditions where the talker varied from trial to trial was better correlated with self-reports of listening ability than was performance in a single-talker condition where variability was constrained.
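The "easy"/"hard" distinction rests on phonological neighborhood density: a word's neighbors are usually counted as all lexical items that differ from it by a single phoneme substitution, deletion, or addition. The sketch below illustrates that neighbor definition on a toy, hypothetical phoneme-transcribed lexicon; it is not the word lists or analysis used in the study.

```python
# Illustrative sketch: "neighbors" of a word are defined here as forms
# differing by one phoneme substitution, deletion, or addition.
# Phoneme strings are hypothetical, space-separated transcriptions.
def one_edit_apart(a, b):
    """True if phoneme lists a and b differ by exactly one edit."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighbors(word, lexicon):
    """Return lexicon entries within one phoneme edit of `word`."""
    w = word.split()
    return [cand for cand in lexicon
            if cand != word and one_edit_apart(w, cand.split())]

lexicon = ["k ae t", "b ae t", "k ae p", "k ao t", "ae t", "k ae t s"]
print(neighbors("k ae t", lexicon))  # every other entry is a neighbor of "k ae t"
```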

6.
7.
Acoustic reflex thresholds (ART) and loudness discomfort levels (LDL) were obtained from 51 ears of 34 deaf children using broad band noise and pure tones. Both thresholds and levels were recorded using the psychophysical method of tracking. Results indicate that the relationship between ART and LDL is at variance with similar data for normal hearing individuals. Specifically, in over 39% of these deaf children LDL was obtained at a lower intensity than ART. Implications for hearing aid fitting with deaf children are discussed.

8.
This study analyzed the effects of auditory impairment, age and sex on the auditory brainstem response (ABR) wave latencies. ABR wave I, wave V and I-V interval measures were extracted from the clinical records of 201 patients with cochlear hearing loss. Females had consistently earlier wave V latencies and shorter I-V intervals than males. No age effects were observed. Degree of impairment had a systematic effect on ABR wave latencies and I-V intervals. Wave I displayed latency extension with increasing levels of high-frequency hearing loss, whilst for wave V increases in latency were dependent upon both degree and slope of the hearing loss. Present results suggest that many of the previously reported sex differences and variable interactions seen for the ABR can be accounted for by differences in the underlying distribution of audiogram shapes within and between study populations. Different audiometric configurations were found to produce consistent differential effects on both wave I and wave V latency and thus influence the I-V interval. This study underlines the need to develop a more detailed model of impairment effects if correction factors are to be employed more effectively in ABR testing for retrocochlear pathology.
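The I-V interval reported here is simply the interpeak latency: wave V latency minus wave I latency, in milliseconds. A minimal Python sketch with invented records (not the 201 clinical cases) shows the computation and a by-sex mean.

```python
# Hypothetical ABR records; the I-V interpeak interval is simply
# wave V latency minus wave I latency (milliseconds).
from statistics import mean

records = [  # (sex, wave I latency ms, wave V latency ms) -- made-up values
    ("F", 1.60, 5.55), ("F", 1.72, 5.70), ("M", 1.65, 5.80), ("M", 1.80, 6.05),
]

def iv_interval(wave_i: float, wave_v: float) -> float:
    return wave_v - wave_i

by_sex = {}
for sex, w1, w5 in records:
    by_sex.setdefault(sex, []).append(iv_interval(w1, w5))

for sex, intervals in by_sex.items():
    print(f"{sex}: mean I-V interval = {mean(intervals):.2f} ms")
```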

9.
Reviews recent experimental findings that show that the perception of phonetic distinctions relies on the integration of multiple acoustic cues and is sensitive to the surrounding context in specific ways. Most of these effects have correspondences in speech production and are readily explained by the assumption that listeners make continuous use of their tacit knowledge of speech patterns. A general auditory theory that does not make reference to the specific origin and characteristics of speech can, at best, handle only a small portion of the phenomena reviewed here. Special emphasis is placed on studies that obtained different patterns of results depending on whether the same stimuli were perceived as speech or as nonspeech. Findings provide strong empirical evidence for the existence of a speech-specific mode of perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
BACKGROUND: Occupational hearing loss is the most frequently recognized occupational disease. The assessment should be conducted in accordance with the "Königsteiner Merkblatt", which appeared in its fourth, completely revised edition in 1996. Determination of the degree of disability is mainly based on speech audiometry; adapted complete word understanding is the most important measure. In special cases, only pure-tone audiometry is used for the assessment. Knowledge of both common and uncommon schedules is important for the assessment. METHODS AND PATIENTS: The results of 200 audiometric examinations in cases of occupational hearing loss were evaluated with eight different schedules. Four of these schedules for determining hearing loss are based on pure-tone audiometry. Boenninghaus and Röser's schedule uses speech audiometry, taking into account simple and adapted complete word understanding. Lehnhardt's schedule uses pure-tone and speech audiometry to determine the degree of disability. It is further shown that complete word understanding is the most important parameter for the quantitative determination of permanent noise-induced hearing loss; it is even possible to determine the degree of disability using the complete word understanding alone. For all cases, the eight schedules were used to calculate the average hearing loss and the average degree of disability. It was also determined in how many cases, according to each schedule, a degree of disability of less than 10%, 10 to 15%, 20%, or more than 20% was calculated. RESULTS AND CONCLUSION: Comparing these eight schedules showed that the use of adapted complete word understanding increases the number of cases with a 10% and 20% degree of disability. Using Röser's schedule of 1980, the number of cases with minimal handicap increases. With the new "Königsteiner Merkblatt", a 10% degree of disability is reached more easily than previously.
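The tallies described here, i.e. how many cases each schedule places below 10%, at 10 to 15%, at 20%, or above 20% degree of disability, amount to binning each case's computed percentage. A minimal Python sketch with hypothetical values (not the 200 examinations) illustrates that binning.

```python
from collections import Counter

def disability_band(degree_percent: float) -> str:
    """Assign a degree-of-disability value to the bands named in the abstract.
    Degrees are conventionally assigned in coarse steps, so the bands are
    effectively discrete."""
    if degree_percent < 10:
        return "<10%"
    if degree_percent <= 15:
        return "10-15%"
    if degree_percent <= 20:
        return "20%"
    return ">20%"

# Hypothetical degrees of disability computed with one schedule
degrees = [0, 5, 10, 10, 15, 20, 20, 25, 30, 10]
print(Counter(disability_band(d) for d in degrees))
```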

11.
A four-generation family suffering from an autosomal-dominant, congenital, nonprogressive, nonsyndromic hearing loss was found in a rural region of Austria. The hearing loss was moderate to severe, a pure-tone audiogram showing a U-shaped form with maximum loss at 2,000 Hz. An initial genome search led to a lod score of 3.01 with markers on chromosome 15. This locus was registered as DFNA8 in the HUGO database. Further sampling of the family, however, yielded data that reduced the maximal lod score with chromosome 15 markers to 1.81. The genome search was restarted using an ABI genotyper, which eventually detected several positive two-point lod scores with markers from the long arm of chromosome 11. The highest value was 3.6, which was seen with the marker D11S934. Haplotype analysis excluded the gene from the chromosomal region proximal to D11S898 and distal to D11S1309. These results place the gene in the region of the hearing loss gene DFNA12. Recent evidence suggests that the somewhat different phenotypes found in these two families are due to two different mutations in the human alpha-tectorin gene (Verhoeven et al., 1998).
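For context on the reported lod scores: a two-point lod score compares the likelihood of the data at a recombination fraction theta against free recombination, LOD(theta) = log10[L(theta)/L(0.5)]; for a phase-known, fully informative pedigree with r recombinants among n scored meioses this reduces to log10[theta^r (1-theta)^(n-r) / 0.5^n]. The Python sketch below uses hypothetical counts, not this family's genotype data.

```python
import math

def lod_score(theta: float, recombinants: int, meioses: int) -> float:
    """Two-point lod score for a phase-known, fully informative pedigree:
    LOD(theta) = log10[ theta^r * (1 - theta)^(n - r) / 0.5^n ]."""
    r, n = recombinants, meioses
    likelihood_theta = theta ** r * (1.0 - theta) ** (n - r)
    likelihood_null = 0.5 ** n
    return math.log10(likelihood_theta / likelihood_null)

if __name__ == "__main__":
    # Hypothetical counts, not the Austrian family's data: 12 informative
    # meioses, 0 recombinants, evaluated over a grid of theta values.
    for theta in (0.01, 0.05, 0.1, 0.2, 0.3, 0.4):
        print(f"theta = {theta:>4}: LOD = {lod_score(theta, 0, 12):5.2f}")
```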

12.
Simultaneous communication combines both spoken and manual modes to produce each word of an utterance. This study investigated the potential influence of alterations in the temporal structure of speech produced by inexperienced signers during simultaneous communication on the perception of final consonant voicing. Inexperienced signers recorded words that differed only in the voicing characteristic of the final consonant under two conditions: (1) speech alone and (2) simultaneous communication. The words were subsequently digitally edited to remove the final consonant and played to 20 listeners who, in a forced-choice paradigm, circled the word they thought they heard. Results indicated that accurate perception of final consonant voicing was not impaired by changes in the temporal structure of speech that accompanied the inexperienced signers' simultaneous communication.

13.
14.
Rats bearing the Walker 256 intramuscular carcinosarcoma were treated intraperitoneally with tritium-labeled vernolepin or with its nontumor-inhibitory methanol adduct. Following treatment with 3H-vernolepin on several different dosage schedules, the tumors were found to contain significantly more radioactivity per gram wet weight than control tissue (muscle from the contralateral limb). After the administration of the nontumor-inhibitory methanol adduct, no such difference was observed. The distribution of radioactivity in various other organs (liver, kidney, spleen, intestine, lung, heart, fat, blood, and brain) was measured following treatment with the parent compound (3H-vernolepin). The implications of these data in terms of the suggested mechanism of action of sesquiterpene lactone tumor inhibitors are discussed.

15.
How do listeners integrate temporally distributed phonemic information into coherent representations of syllables and words? For example, increasing the silence interval between the words gray chip may result in the percept great chip, whereas increasing the duration of fricative noise in chip may alter the percept to great ship (B. H. Repp, A. M. Liberman, T. Eccardt, and D. Pesetsky, 1978). The ARTWORD neural model quantitatively simulates such context-sensitive speech data. In ARTWORD, sequentially stored phonemic items in working memory provide bottom-up input to unitized list chunks that group together sequences of items of variable length. The list chunks compete with each other. The winning groupings feed back to establish a resonance which temporarily boosts the activation levels of selected items and chunks, thereby creating an emergent conscious percept whose properties match such data. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
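The abstract's core mechanism, competing list chunks whose winner resonates with the stored items, can be caricatured with a generic two-unit winner-take-all competition. The sketch below is emphatically not the ARTWORD equations (those are specified in the original paper); it is only a toy leaky-integrator competition with invented parameters, meant to convey how relative bottom-up support can decide between alternative groupings such as "gray chip" versus "great chip".

```python
# Toy illustration only -- not the ARTWORD model. Two candidate groupings
# receive bottom-up support from items in working memory and inhibit each
# other; the grouping with more support wins the competition.
def compete(support_a: float, support_b: float, steps: int = 200,
            dt: float = 0.05, decay: float = 1.0, inhibition: float = 2.0):
    a = b = 0.0
    for _ in range(steps):
        da = -decay * a + support_a - inhibition * b
        db = -decay * b + support_b - inhibition * a
        a = max(0.0, a + dt * da)
        b = max(0.0, b + dt * db)
    return a, b

# A longer silent gap might weaken support for one grouping and strengthen
# the other (hypothetical numbers): the activations show which grouping wins.
print(compete(support_a=1.0, support_b=0.6))  # first grouping wins
print(compete(support_a=0.6, support_b=1.0))  # second grouping wins
```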

16.
Recordings were made of female teachers reciting congruent and contradictory messages. In the congruent condition, the verbal and tonal elements of the communication were consistent. In the contradictory condition, the tone of voice was incongruent with the words spoken. All the congruent and contradictory vocal expressions were arranged in random order on a single tape and played to "normal" and "disturbed" boys of different ages (Grades 2, 4, and 6). Significant effects were found for type of speech and age: Ss reacted more negatively to contradictory than to congruent speech, and younger Ss responded more negatively to contradiction than did older ones. The disturbed Ss reacted significantly more negatively than normal Ss to contradictory messages. (15 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
OBJECTIVE: The study was conducted to determine the relationship between measures of auditory performance in elderly individuals. Specifically, its goal was to uncover a set of measures correlated with measures of speech understanding under specific conditions of interference, in order to gain a better understanding of the decline of the "cocktail party effect" in aging. DESIGN: Audiological status and auditory performance of a group of elderly (60- to 81-yr-old) individuals were determined through a test battery. When present, the hearing loss of elderly subjects was symmetrical in the two ears and, at most, moderate. The battery included tests of speech intelligibility at the word and sentence levels, with and without the presence of interfering speech. In addition, pure-tone and speech reception thresholds, perception of spectrally or temporally distorted speech, and auditory resolution of frequency, time, and space were tested. Two tests received special consideration: the Speech Perception In Noise Test and the Modified Rhyme Reverberation Test. RESULTS: Results indicated that, despite the nearly normal hearing levels that characterized much of the subject group, auditory sensitivity measures showed persistent correlations with all other measures, with the exception of auditory resolution of frequency, time, and space. As a set, sensitivity measures accounted for more than 85% of the variance. When auditory sensitivity was controlled for, other factors underlying speech processing in the presence of interfering stimuli were uncovered, factors most likely related to the ability to perceptually segregate one speech signal from another. CONCLUSIONS: The findings suggest that, to determine the relationship between audiological/auditory test results of an elderly population, it is important to remove the effects of hearing loss through appropriate statistical methods.
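"Controlling for" auditory sensitivity is commonly done via partial correlation: regress the sensitivity measure (e.g., a pure-tone average) out of both variables and correlate the residuals. The Python/NumPy sketch below illustrates that generic procedure on simulated data; it is not the authors' dataset, and their exact statistical method may have differed.

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation between x and y after linearly regressing out `control`
    from both (one standard way to 'control for' a covariate)."""
    def residuals(v, c):
        design = np.column_stack([np.ones_like(c), c])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    rx, ry = residuals(x, control), residuals(y, control)
    return float(np.corrcoef(rx, ry)[0, 1])

# Simulated, hypothetical data: pure-tone average (dB HL) plus two
# interference-task scores that both worsen as the PTA rises.
rng = np.random.default_rng(0)
pta = rng.uniform(5, 45, size=60)
spin = 95 - 0.8 * pta + rng.normal(0, 5, size=60)
rhyme = 90 - 0.7 * pta + 0.3 * (spin - spin.mean()) + rng.normal(0, 5, size=60)

print("raw r     :", round(float(np.corrcoef(spin, rhyme)[0, 1]), 2))
print("partial r :", round(partial_corr(spin, rhyme, pta), 2))
```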

18.
According to one approach to speech perception, listeners perceive speech by applying general pattern matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different explanations for the phenomenon of compensation for coarticulation (CfC). An example of CfC is that if a speaker produces a gesture with a front place of articulation, it may be pulled slightly backwards if it follows a back place of articulation, and listeners' category boundaries shift (compensate) accordingly. The gestural account appeals to direct attunement to coarticulation to explain CfC, whereas the auditory account explains it by spectral contrast. In previous studies, spectral contrast and gestural consequences of coarticulation have been correlated, such that both accounts made identical predictions. We identify a liquid context in Tamil that disentangles contrast and coarticulation, such that the two accounts make different predictions. In a standard CfC task in Experiment 1, gestural coarticulation rather than spectral contrast determined the direction of CfC. Experiments 2, 3, and 4 demonstrated that tone analogues of the speech precursors failed to produce the same effects observed in Experiment 1, suggesting that simple spectral contrast cannot account for the findings of Experiment 1. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
OBJECTIVE: This study was designed to determine the prevalence of minimal sensorineural hearing loss (MSHL) in school-age children and to assess the relationship of MSHL to educational performance and functional status. DESIGN: To determine prevalence, a single-staged sampling frame of all schools in the district was created for 3rd, 6th, and 9th grades. Schools were selected with probability proportional to size in each grade group. The final study sample was 1218 children. To assess the association of MSHL with educational performance, children identified with MSHL were assigned as cases into a subsequent case-control study. Scores of the Comprehensive Test of Basic Skills (4th Edition) (CTBS/4) then were compared between children with MSHL and children with normal hearing. School teachers completed the Screening Instrument for Targeting Education Risk (SIFTER) and the Revised Behavior Problem Checklist for a subsample of children with MSHL and their normally hearing counterparts. Finally, data on grade retention for a sample of children with MSHL were obtained from school records and compared with school district norm data. To assess the relationship between MSHL and functional status, test scores of all children with MSHL and all children with normal hearing in grades 6 and 9 were compared on the COOP Adolescent Chart Method (COOP), a screening tool for functional status. RESULTS: MSHL was exhibited by 5.4% of the study sample. The prevalence of all types of hearing impairment was 11.3%. Third grade children with MSHL exhibited significantly lower scores than normally hearing controls on a series of subtests of the CTBS/4; however, no differences were noted at the 6th and 9th grade levels. The SIFTER results revealed that children with MSHL scored poorer on the communication subtest than normal-hearing controls. Thirty-seven percent of the children with MSHL failed at least one grade. Finally, children with MSHL exhibited significantly greater dysfunction than children with normal hearing on several subtests of the COOP including behavior, energy, stress, social support, and self-esteem. CONCLUSIONS: The prevalence of hearing loss in the schools almost doubles when children with MSHL are included. This large, education-based study shows clinically important associations between MSHL and school behavior and performance. Children with MSHL experienced more difficulty than normally hearing children on a series of educational and functional test measures. Although additional research is necessary, results suggest the need for audiologists, speech-language pathologists, and educators to evaluate carefully our identification and management approaches with this population. Better efforts to manage these children could result in meaningful improvement in their educational progress and psychosocial well-being.
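As a quick arithmetic check on the prevalence figure: 5.4% of the 1218 children screened is about 66 cases. The sketch below reproduces that count and adds a Wilson 95% confidence interval for the proportion as an illustration; the interval is our addition, not a figure reported by the authors.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

n = 1218                  # children screened (from the abstract)
cases = round(0.054 * n)  # approx. 66 children with MSHL (5.4%)
lo, hi = wilson_ci(cases, n)
print(f"MSHL prevalence: {cases}/{n} = {cases/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```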

20.
Much research on cognitive competence in normal older adults has documented age and sex differences. The authors used new cross-sectional data from the Victoria Longitudinal Study (VLS) (n = 386; age 61 to 95 years) to examine how health and biological age influence age and sex differences in cognitive aging. The authors found evidence for both moderating and mediating influences. Age differences were moderated by health status, such that the negative effects of age were most pronounced among participants of relatively better health. Sex differences were moderated by health and were more pronounced among participants reporting comparatively poorer health. Although health mediated a notable amount of age-related cognitive variation, BioAge mediated considerably more variance, even after statistical control for differences in health. A complex pattern emerged for the mediation of sex differences: Although BioAge accounted for sex-related variation in cognitive performance, health operated to suppress these differences. Overall, both health and BioAge predicted cognitive variation independently of chronological age. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   
