Similar Articles
20 similar articles found
1.
In speech perception, phonetic information can be acquired optically as well as acoustically. The motor theory of speech perception holds that motor control structures are involved in the processing of visible speech, whereas perceptual accounts do not make this assumption. Motor involvement in speech perception was examined by showing participants response-irrelevant movies of a mouth articulating /bʌ/ or /dʌ/ and asking them to verbally respond with either the same or a different syllable. The letters "Ba" and "Da" appeared on the speaker's mouth to indicate which response was to be performed. A reliable interference effect was observed. In subsequent experiments, perceptual interference was ruled out by using response-unrelated imperative stimuli and by preexposing the relevant stimulus information. Further, it was demonstrated that simple directional features (opening and closing) do not account for the effect. Rather, the present study provides evidence for the view that visible speech is processed up to a late, response-related processing stage, as predicted by the motor theory of speech perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
In this article the author proposes an episodic theory of spoken word representation, perception, and production. By most theories, idiosyncratic aspects of speech (voice details, ambient noise, etc.) are considered noise and are filtered in perception. However, episodic theories suggest that perceptual details are stored in memory and are integral to later perception. In this research the author tested an episodic model (MINERVA 2; D. L. Hintzman, 1986) against speech production data from a word-shadowing task. The model predicted the shadowing-response-time patterns, and it correctly predicted a tendency for shadowers to spontaneously imitate the acoustic patterns of words and nonwords. It also correctly predicted imitation strength as a function of "abstract" stimulus properties, such as word frequency. Taken together, the data and theory suggest that detailed episodes constitute the basic substrate of the mental lexicon. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
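The MINERVA 2 retrieval rule tested in this study is well documented (Hintzman, 1986): every stored trace is activated in proportion to the cube of its similarity to the probe, and the activation-weighted sum of the traces forms the "echo". A minimal sketch follows; the feature vectors in the usage below are illustrative, not taken from the study.

```python
import numpy as np

def echo(probe, traces):
    """MINERVA 2 retrieval (Hintzman, 1986). Each stored trace is
    activated by the cube of its similarity to the probe; the echo is
    the activation-weighted sum of all traces."""
    sims = []
    for t in traces:
        relevant = (probe != 0) | (t != 0)    # features present in either
        n = max(int(relevant.sum()), 1)
        sims.append(float(probe @ t) / n)     # normalized dot product
    acts = np.array(sims) ** 3                # cubing sharpens retrieval
    intensity = acts.sum()                    # echo intensity: familiarity
    content = acts @ traces                   # echo content: what comes back
    return intensity, content
```

A probe matching one stored trace exactly returns an echo whose content reproduces that trace; a partial probe retrieves a blend of similar traces, which is how the model produces the imitation and frequency effects described above.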

3.
Tested the hypothesis that children with specific disabilities in reading may have subtle auditory and/or speech perception deficits by comparing the performance of 14 severely disabled readers (aged 8–14 yrs) with 14 normal readers in 4 speech perception tasks. Results indicate that perception was significantly less categorical among the severely disabled readers in 3 of the 4 speech perception tasks. The possible implications of this small, but significant, difference are discussed in relation to previous conflicting findings concerning reading performance in dyslexia. (French abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
A fundamental goal of an information-processing approach to speech perception is to specify the levels of analysis between the initial sensory coding of the signal and the recognition of the phonetic sequence that it conveys. A series of experiments provides evidence for at least 3 qualitatively different levels of analysis involved in the perception of speech. Several properties for the representations of each level are described, including a locus (peripheral or monaurally driven vs. central or binaurally driven), a stimulus domain, and the mechanisms involved in response adjustment as a function of repeated stimulation. The stimulus domains for the 3 levels are (a) processes that deal with simple acoustic patterns, (b) processes that integrate more complex acoustic patterns, and (c) processes that represent categorical or phonetic information. The convergence among several different approaches used to determine levels of analysis supports the 3-level model. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
The development of speech perception during the 1st year reflects increasing attunement to native language features, but the mechanisms underlying this development are not completely understood. One previous study linked reductions in nonnative speech discrimination to performance on nonlinguistic tasks, whereas other studies have shown associations between speech perception and vocabulary growth. The present study examined relationships among these abilities in 11-month-old infants using a conditioned head-turn test of native and nonnative speech sound discrimination, nonlinguistic object-retrieval tasks requiring attention and inhibitory control, and the MacArthur-Bates Communicative Development Inventory (L. Fenson et al., 1993). Native speech discrimination was positively linked to receptive vocabulary size but not to the cognitive control tasks, whereas nonnative speech discrimination was negatively linked to cognitive control scores but not to vocabulary size. Speech discrimination, vocabulary size, and cognitive control scores were not associated with more general cognitive measures. These results suggest specific relationships between domain-general inhibitory control processes and the ability to ignore variation in speech that is irrelevant to the native language and between the development of native language speech perception and vocabulary. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Connectionist models of perception and cognition, including the process of deducing meaningful messages from patterns of acoustic waves emitted by vocal tracts, are developed and refined as human understanding of brain function, psychological processes, and the properties of massively parallel architectures advances. The present article presents several important contributions from diverse points of view in the area of connectionist modeling of speech perception and discusses their relative merits with respect to specific theoretical issues and empirical findings. TRACE, the Elman/Norris net, and Adaptive Resonance Theory constitute pivotal points exemplifying overall modeling success, progress in temporal representation, and plausible modeling of learning, respectively. Other modeling efforts are presented for the specific insights they offer, and the article concludes with a discussion of computational versus dynamic modeling of phonological processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Current models of reading and speech perception differ widely in their assumptions regarding the interaction of orthographic and phonological information during language perception. The present experiments examined this interaction through a 2-alternative, forced-choice paradigm, and explored the nature of the connections between graphemic and phonemic processing subsystems. Exps 1 and 2 demonstrated a facilitation-dominant influence (i.e., benefits exceed costs) of graphemic contexts on phoneme discrimination, which is interpreted as a sensitivity effect. Exps 3 and 4 demonstrated a symmetrical influence (i.e., benefits equal costs) of phonemic contexts on grapheme discrimination, which can be interpreted as either a bias effect, or an equally facilitative/inhibitory sensitivity effect. General implications for the functional architecture of language processing models are discussed, as well as specific implications for models of visual word recognition and speech perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
Memorializes Alvin M. Liberman, known for his work in the field of speech perception. He was a pioneer in the experimental study of speech, and he contributed a widely cited motor theory of speech perception. His early research on the failure to accurately perceive words presented in acoustic alphabets led to the discovery that speech is not composed of discrete, segment-sized units, and that the acoustic structure for consonants and vowels is highly context sensitive. From this finding, he developed the motor theory of speech perception, in which perception of speech is a component of the human biological adaptation of language use. His sustained interest in this field of research expanded to include theories of reading difficulties and the discovery that children who are failing to learn to read on schedule characteristically lack phoneme awareness. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
The present investigation expanded on an earlier study by Miyamoto, Osberger, Todd, Robbins, Karasek, et al. (1994) who compared the speech perception skills of two groups of children with profound prelingual hearing loss. The first group had received the Nucleus multichannel cochlear implant and was tested longitudinally. The second group, who were not implanted and used conventional hearing aids, was tested at a single point in time. In the present study, speech perception scores were examined over time for both groups of children as a function of communication mode of the child. Separate linear regressions of speech perception scores as a function of age were computed to estimate the rate of improvement in speech perception abilities that might be expected due to maturation for the hearing aid users (n=58) within each communication mode. The resulting regression lines were used to compare the estimated rate of speech perception growth for each hearing aid group to the observed gains in speech perception made by the children with multichannel cochlear implants. A large number of children using cochlear implants (n=74) were tested over a long period of implant use (m=3.5 years) that ranged from zero to 8.5 years. In general, speech perception scores for the children using cochlear implants were higher than those predicted for a group of children with 101-110 dB HL of hearing loss using hearing aids, and they approached the scores predicted for a group of children with 90-100 dB HL of hearing loss using hearing aids.
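The maturation baseline described above is an ordinary least-squares regression of perception score on age, fit separately for each communication-mode group. A sketch with invented numbers (the study's actual scores and group sizes are in the article, not reproduced here):

```python
import numpy as np

# Hypothetical (age, score) pairs for one hearing-aid communication-mode
# group; the study fit one such line per group, then compared implant
# users' observed gains against the line's prediction.
ages   = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
scores = np.array([10.0, 14.0, 19.0, 22.0, 27.0])

slope, intercept = np.polyfit(ages, scores, 1)  # least-squares line
# slope = expected points of improvement per year from maturation alone

def predicted(age):
    """Score expected at a given age from maturation alone."""
    return slope * age + intercept
```

An implant user scoring above `predicted(age)` for the matched hearing-loss group is then improving faster than maturation would explain, which is the comparison the study makes.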

10.
The use of electropalatography (EPG) in the teaching of new speech skills to deaf speakers is not widely researched. Whether these skills can be maintained, or whether they can become fully automatic, without enough speech perception to enable auditory self-monitoring is therefore unclear. Most approaches to increasing speech intelligibility in deaf speakers, rightly, place an emphasis on maximizing residual hearing and listening skills. However, a small amount of evidence exists which suggests that speech production can aid speech perception. This research aimed to measure change in speech perception, through use of the speech pattern audiometer (SPA), whilst working on speech production with the electropalatograph. Intervention took place over a year. The issues of maintenance and generalization are discussed.

11.
Duplex perception has been interpreted as revealing distinct systems for general auditory perception and speech perception. The systems yield distinct experiences of the same acoustic signal, one conforming to the acoustic structure itself and the other to its source in vocal-tract activity. However, this interpretation has not been tested by examining whether duplex perception can be obtained for nonspeech sounds that are not plausibly perceived by a specialized system. In 5 experiments, some of the phenomena associated with duplex perception of speech are replicated using the sound of a slamming door. Similarities between 26 university students' responses to syllables and door sounds are striking enough to suggest that two conclusions in the speech literature should be tempered: (1) that duplex perception is special to sounds for which there are perceptual modules, and (2) that duplex perception occurs because distinct systems have rendered different percepts of the same acoustic signal. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
To explore the relationship between the processing of melodic and rhythmic patterns in speech and music, we tested the prosodic and musical discrimination abilities of two "amusic" subjects who suffered from music perception deficits secondary to bilateral brain damage. Prosodic discrimination was assessed with sentence pairs where members of a pair differed by intonation or rhythm, and musical discrimination was tested using musical-phrase pairs derived from the prosody of the sentence pairs. This novel technique was chosen to make task demands as comparable as possible across domains. One amusic subject showed good performance on both linguistic and musical discrimination tasks, while the other had difficulty with both tasks. In both subjects, level of performance was statistically similar across domains, suggesting a shared neural resource for prosody and music. Further tests suggested that prosody and music may overlap in the processes used to maintain auditory patterns in working memory.

13.
A review of the literature shows that in the past decade, most theoretical accounts of speech perception have stressed the role of feature detectors in mapping initial auditory transforms of the speech signal onto features or phonemes. Several types of experiments are often viewed as supporting the feature detector hypothesis. These include electrophysiological studies of the visual and auditory systems of nonhuman species, studies of categorical perception of speech sounds by human adults and infants, and especially, studies of selective adaptation to speech sounds. The present article argues that evidence for feature detectors in speech perception is equivocal at best and that there are compelling reasons to reject the detector hypothesis (e.g., lack of firm consensus regarding the nature of detector outputs). (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
These remarks are in response to "Role of articulation in speech perception: Clues from production" by Björn Lindblom. It is suggested that the form in which the lexicon is stored includes both segments and distinctive features, and this representation is neutral with respect to the articulatory and acoustic domains. The process by which features are determined from the sound requires that patterns of acoustic properties be identified. In developing models of speech perception, knowledge of articulatory-acoustic relations can be a guide in defining these properties, but it is not necessary for the models to assign primary status to articulation.

15.
Lateralized displays are used widely to investigate hemispheric asymmetry in language perception. However, few studies have used lateralized displays to investigate hemispheric asymmetry in visual speech perception, and those that have done so yielded mixed results. This issue was investigated in the current study by presenting visual speech to either the left hemisphere (LH) or the right hemisphere (RH) using the face as recorded (normal), a mirror image of the normal face (reversed), and chimeric displays constructed by duplicating and reversing just one hemiface (left or right) to form symmetrical images (left-duplicated, right-duplicated). The projection of displays to each hemisphere was controlled precisely by an automated eye-tracking technique. Visual speech perception showed the same, clear LH advantage for normal and reversed displays, a greater LH advantage for right-duplicated displays, and no hemispheric difference for left-duplicated displays. Of particular note is that perception of LH displays was affected greatly by the presence of right-hemiface information, whereas perception of RH displays was unaffected by changes in hemiface content. Thus, when investigated under precise viewing conditions, the indications are not only that the dominant processes of visual speech perception are located in the LH but that these processes are uniquely sensitive to right-hemiface information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Little is known about the processes underlying the nonlinear relationship between acoustics and speech perception. In Exp 1, the effects of systematic variation of a single acoustic parameter (silent gap duration between a natural utterance of "s" and a synthetic vowel "ay") on judgments of speech category were explored. The resulting shifts in category boundary between say and stay showed rich dynamics, including hysteresis, contrast, and critical boundary effects. A dynamical model is proposed to account for the observed patterns. Exp 2 evaluated 1 prediction of the model, that changing the relative stability of the 2 percepts allows categorical switching. In agreement with the model, an increase in the number of stimulus repetitions maximized the frequency of judgments of category change near the boundary. Thus, a dynamical approach affords the rudiments for a theory of the effects of temporal context on speech categorization. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
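Hysteresis of the kind reported here is standardly modeled with a tilted double-well potential whose tilt tracks the control parameter (here, gap duration): the percept is whichever well the state currently occupies, and a category switch occurs only when that well vanishes. The particular potential below, V(x) = k*x - x**2/2 + x**4/4, is a common textbook choice for such models, not necessarily the authors' equations.

```python
import numpy as np

def settle(x, k, steps=2000, dt=0.01):
    """Relax the percept state x by gradient descent on the tilted
    double-well V(x) = k*x - x**2/2 + x**4/4; the well it lands in
    is the reported category."""
    for _ in range(steps):
        x -= dt * (k - x + x ** 3)   # dV/dx = k - x + x**3
    return x

def sweep(ks):
    """Carry the settled state across a sweep of the control parameter k
    (standing in for gap duration); return the category sign at each k."""
    x, percepts = -1.0, []
    for k in ks:
        x = settle(x, k)
        percepts.append(1 if x > 0 else -1)
    return percepts

up = sweep(np.linspace(-0.6, 0.6, 25))     # increasing-gap series
down = sweep(np.linspace(0.6, -0.6, 25))   # decreasing-gap series
# Hysteresis: the category switch occurs at different k values on the
# upward and downward sweeps, because each percept persists until its
# own well disappears.
```

Within the bistable range of k both wells exist, so the reported category there depends on sweep direction, which is the hysteresis effect the abstract describes.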

17.
18.
Investigated the effectiveness of nonspeech sounds as auditory stimuli in eliciting a nonverbal analog to the verbal transformation effect. 25 college students were given 5 stimuli (3 pure tones of 250, 1,000, and 4,000 Hz, white noise, and a 5-note musical motif). Results indicate that the transformations elicited by the nonspeech stimuli were similar in number of forms elicited, specific forms, and types of transformations to those produced by speech stimuli. Implications for the proposed mechanisms underlying the perception of speech are discussed. (French summary) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with similar spectrotemporal structure to investigate to what extent humans have adapted to the specific characteristics of natural audiovisual speech. We manipulated spectrotemporal structure of the auditory signal, stimulus length, and task context. Results indicate that the temporal integration window is narrower and more asymmetric for speech than for nonspeech signals. When perceiving audiovisual speech, subjects tolerate visual leading asynchronies, but are nevertheless very sensitive to auditory leading asynchronies that are less likely to occur in natural speech. Thus, speech perception may be fine-tuned to the natural statistics of audiovisual speech, where facial movements always occur before acoustic speech articulation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Treatments for stuttering based on variants of Goldiamond's prolonged-speech procedure involve teaching clients to speak with novel speech patterns. Those speech patterns consist of specific skills, described with such terms as soft contacts, gentle onsets, and continuous vocalization. It might be expected that effective client learning of such speech skills would be dependent on clinicians' ability to reliably identify any departures from the correct production of such speech targets. The present study investigated clinicians' reliability in detecting such errors during a prolonged-speech treatment program. Results showed questionable intraclinician agreement and poor interclinician agreement. Nonetheless, the prolonged-speech program in question is known to be effective in controlling stuttered speech. The clinical and theoretical implications of these findings are discussed.
