Similar Articles
20 similar articles found (search time: 15 ms)
1.
The kinematics of human jaw movements were assessed in terms of the three orientation angles and three positions that characterize the motion of the jaw as a rigid body. The analysis focused on the identification of the jaw's independent movement dimensions and was based on an examination of jaw motion paths plotted in various combinations of linear and angular coordinate frames. Overall, both behaviors (speech and mastication) were characterized by independent motion in four degrees of freedom. In general, when jaw movements were plotted to show orientation in the sagittal plane as a function of horizontal position, relatively straight paths were observed. In speech, the slopes and intercepts of these paths varied depending on the phonetic material. The vertical position of the jaw was observed to shift up or down so as to displace the overall form of the sagittal-plane motion path of the jaw. Yaw movements were small but independent of pitch and of vertical and horizontal position. In mastication, the slope and intercept of the relationship between pitch and horizontal position were affected by the type of food and its size, although the range of variation was less than that observed in speech. When vertical jaw position was plotted as a function of horizontal position, the basic form of the path of the jaw was maintained but could be shifted vertically. In general, larger bolus diameters were associated with lower jaw positions throughout the movement. The timing of pitch and yaw motion differed: the most common pattern involved changes in pitch angle during jaw opening, followed by a phase dominated by lateral motion (yaw). Thus, in both behaviors there was evidence of independent motion in pitch, yaw, horizontal position, and vertical position, consistent with the idea that motions in these degrees of freedom are independently controlled.
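The rigid-body description above (three orientation angles plus three translations) can be made concrete with a small sketch. This is an illustration of the generic six-degree-of-freedom pose decomposition, not the authors' analysis pipeline; the ZYX Euler convention and the axis assignments (pitch as sagittal-plane rotation, yaw as rotation about the vertical axis) are assumptions made for the example.

```python
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Build a rotation matrix from ZYX Euler angles (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def pose_from_transform(R, t):
    """Recover the six rigid-body degrees of freedom from a pose:
    three Euler angles (ZYX convention) plus three translations."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return (yaw, pitch, roll), tuple(t)

# Round-trip check: a pose with independent yaw, pitch, and translation,
# mirroring the independent movement dimensions discussed above.
angles_in = (0.05, -0.20, 0.0)           # small yaw, opening pitch, no roll
R = rotation_zyx(*angles_in)
angles_out, trans = pose_from_transform(R, np.array([1.0, 0.0, -3.5]))
print(np.allclose(angles_in, angles_out))  # True
```

Plotting one recovered angle against one recovered translation across movement samples is exactly the kind of coordinate-frame combination the abstract describes.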

2.
In this study, changes in upper lip and lower lip integrated electromyographic (IEMG) amplitude and in temporal measures were investigated in relation to linguistic factors known to influence stuttering. Nonstuttering subjects first read and then verbalized sentences of varying length (sentence length factor), in which meaningless but phonologically appropriate character strings were varied in their position within the sentence (word position factor) and their size (word size factor). It was hypothesized that the production of stressed, vowel-rounding gestures of words in initial position, longer words, and words in longer sentences would be characterized by specific changes in IEMG amplitude reflecting an increase in speech motor demands, intuitively defined as articulatory effort. The findings largely corroborated our assumptions, showing that words in sentence-initial position have shorter word and vowel durations in combination with an increase in IEMG activity. Similarly, we found shorter vowel durations for longer words, and in sentence-final position an increase in IEMG activity. For longer sentences we found a clear increase in speech rate but, contrary to our expectations, a decrease in IEMG activity. It was speculated that this might reflect a movement reduction strategy that permits higher speech rates with increased coarticulation. These findings are discussed both for their implications for normal speech production and for their possible role in explaining stuttering behavior. Our data illustrate both why stutterers might run a higher risk of stuttering at these linguistic loci and why they might adopt a strategic solution to decrease the motor demands of speech production. The basic outcome of this study is that higher-order (linguistic) specifications can have clear effects on speech motor production.

3.
Progress in the knowledge of auditory processing of complex sounds has been made through coordinated psychophysical, physiological and theoretical studies of periodicity pitch and combination tones. Periodicity pitch is the basis for human perception of musical notes and pitch of voiced speech. The mechanism of perception involves harmonic pattern recognition on the complex Fourier frequency spectra generated by auditory frequency analysis. Combination tones are perceptible distortion tones generated within the cochlea by nonlinear interaction of component stimulus tones. Perception of periodicity pitch is quantitatively accounted for by a two-stage process of frequency analysis subject to random errors and significant nonlinearities, followed by a pattern recognizer that operates very efficiently to measure the period of musical and speech sounds. The basic characteristic of the first stage is a Gaussian standard error function that quantifies the randomness in aural estimation of frequencies of component tones in a complex tone stimulus. Efficient aural measurement of neural spike intervals from the eighth nerve provides a physiological account for the psychophysical characteristic of aural frequency analysis with complex sounds. Although cochlear filtering is an essential stage in auditory frequency analysis, neural time following, rather than details of the filter characteristics, is the decisive factor in determining the precision of aural frequency measurement. It is likely that peripheral auditory coding is similar for sounds in periodicity pitch and in speech perception, although the 'second stage' representing central processing would differ.
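The two-stage account (noisy frequency analysis followed by harmonic pattern recognition) can be sketched computationally. The toy estimator below is in the spirit of optimum-processor models of periodicity pitch, not a reproduction of the specific model reviewed; the 1% relative Gaussian error, the search range, and the small low-harmonic preference used to break subharmonic ties are all assumptions of the sketch.

```python
import numpy as np

def estimate_pitch(component_freqs, sigma_rel=0.01, seed=0):
    """Toy two-stage periodicity-pitch estimator.

    Stage 1: each resolved component frequency is perturbed by Gaussian
    error proportional to frequency (noisy aural frequency analysis).
    Stage 2: a pattern recognizer searches for the fundamental whose
    harmonic series best matches the noisy estimates.
    """
    rng = np.random.default_rng(seed)
    f = np.asarray(component_freqs, dtype=float)
    noisy = f * (1.0 + sigma_rel * rng.standard_normal(f.size))
    best_f0, best_cost = None, np.inf
    for f0 in np.arange(50.0, 500.0, 0.1):
        n = np.maximum(np.round(noisy / f0), 1.0)   # nearest harmonic numbers
        mismatch = np.sum(((noisy - n * f0) / noisy) ** 2)
        cost = mismatch + 1e-6 * n.mean()           # prefer low harmonic numbers
        if cost < best_cost:
            best_f0, best_cost = f0, cost
    return best_f0

# Harmonics 3-5 of 200 Hz: the "missing fundamental" is still recovered,
# the hallmark of periodicity pitch.
print(round(estimate_pitch([600.0, 800.0, 1000.0])))
```

Raising `sigma_rel` degrades the estimate gracefully, mirroring the Gaussian standard error function attributed to the first stage.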

4.
Several theorists have suggested that infants use prosodic cues such as pauses, final lengthening, and pitch changes to identify linguistic units in speech. One potential difficulty with this proposal, however, is that the acoustic shape of an utterance is affected by many factors other than its syntax, including its phonetic, lexical, and discourse structure. This has raised questions about how the infant could use timing and pitch as cues to any aspect of linguistic structure without simultaneously factoring out other effects. Acoustic analyses of connected samples of spontaneous speech addressed to 13.5-14-month-old infants by American English- and by Japanese-speaking mothers revealed that both utterance- and phrase-level acoustic regularities were large enough to be detected in spontaneous speech without correcting for other influences on the same acoustic features. (1) Utterance-final vowels were lengthened and underwent exaggerated pitch changes in both languages, and (2) local acoustic changes in duration (English) or pitch (Japanese) were reliably associated with some phrase boundaries within utterances. These findings suggest that a naive listener could estimate a rough prosodic template for each language based on robust acoustic patterns in observed sentences. We discuss ways in which the learner could combine acoustic and distributional analyses across utterances to acquire language-specific variations in prosodic bracketing cues and to obtain indirect perceptual evidence for the internal structure of utterances.

5.
Representational momentum refers to the phenomenon that observers tend to incorrectly remember an event undergoing real or implied motion as shifted beyond its actual final position. This has been demonstrated in both visual and auditory domains. In 5 pitch discrimination experiments, listeners heard tone sequences that implied either linear, periodic, or null motions in pitch space. Their task was to judge whether the pitch of a probe tone following each sequence was the same or different from the final sequence tone. Results suggested that listeners made errors consistent with extrapolation of coherent pitch patterns (linear, periodic) but not with incoherent (null) ones. Hypotheses associated with internalized physical principles and pattern-based expectations are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Middle ear muscle responses associated with speech production were observed in normal-hearing, stapedectomized, and laryngectomized subjects. Impedance changes associated with speech production were monitored by an electroacoustic impedance bridge simultaneously with vocal output. Results from stapedectomized subjects indicate that the tensor tympani muscle contracts prior to vocalization and is part of the neurological pattern of speech production. Data collected from laryngectomized subjects suggest that the presence of sensory fibers from the larynx is not a prerequisite for middle ear muscle activity during speech production.

7.
Using Mandarin Chinese, a "tone language" in which the pitch contours of syllables differentiate words, the authors examined the acoustic modifications of infant-directed speech (IDS) at the syllable level to test 2 hypotheses: (a) the overall increase in pitch and intonation contour that occurs in IDS at the phrase level would not distort lexical pitch at the syllable level and (b) IDS provides exaggerated cues to lexical tones. Sixteen Mandarin-speaking mothers were recorded while addressing their infants and addressing an adult. The results indicate that IDS does not distort the acoustic cues that are essential to word meaning at the syllable level; evidence of exaggeration of the acoustic differences in IDS was observed, extending previous findings of phonetic exaggeration to the lexical level.

8.
Review of the literature indicates that speech is controlled by an intricate closed-loop feedback system. To bring about feedback control of the speech musculature, the higher neural centers should be kept constantly aware of (a) spatial position, (b) direction of movement, and (c) rate of movement of the articulators. The feedback mechanisms existing within the tongue that can mediate such dynamic space-time information are described. The unique 3-dimensional arrangement of the lingual muscle-spindle network is structurally organized to operate as a built-in geometric reference system. This network is capable of signaling higher brain centers as to the changing length, position, and rate of movement of the tongue during the articulatory motions of human speech. The short-latency, cervical dorsal root pathway conducting hypoglossal afferent information is described as well as the cortical projections of this complex rapidly-acting feedback system. The implications of these neurophysiological findings support a phonetic target-oriented theory of speech production. Neural receptors provide essential information regarding the moment-to-moment state of the speech system so that relatively invariant ends can be achieved despite the mechanical phonetic variability characterizing coarticulation. (58 ref.)

9.
Human listeners can keep track of statistical regularities among temporally adjacent elements in both speech and musical streams. However, for speech streams, when statistical regularities occur among nonadjacent elements, only certain types of patterns are acquired. Here, using musical tone sequences, the authors investigate nonadjacent learning. When the elements were all similar in pitch range and timbre, learners acquired moderate regularities among adjacent tones but did not acquire highly consistent regularities among nonadjacent tones. However, when elements differed in pitch range or timbre, learners acquired statistical regularities among the similar, but temporally nonadjacent, elements. Finally, with a moderate grouping cue, both adjacent and nonadjacent statistics were learned, indicating that statistical learning is governed not only by temporal adjacency but also by Gestalt principles of similarity.
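The adjacent and nonadjacent statistics referred to above are typically transitional probabilities between stream elements. A minimal sketch, with the toy stream and `gap` parameterization being my own illustration rather than the study's stimuli:

```python
from collections import Counter

def transitional_probabilities(stream, gap=1):
    """Transitional probabilities between elements `gap` steps apart.

    gap=1 gives adjacent statistics; gap=2 gives nonadjacent statistics
    (A _ B patterns), the kind of regularity examined with tone streams.
    """
    pairs = Counter(zip(stream, stream[gap:]))
    firsts = Counter(stream[:-gap])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Toy tone stream: 'A' perfectly predicts 'B' two positions later,
# while the intervening element (X/Y/Z) varies freely.
stream = list("AXBAYBAZBAXBAYB")
tp = transitional_probabilities(stream, gap=2)
print(tp[("A", "B")])  # 1.0
```

A learner tracking only `gap=1` statistics would miss this structure, which is why cues such as pitch-range or timbre similarity (grouping the A and B elements) matter for acquiring it.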

10.
In 3 experiments, the authors examined short-term memory for pitch and duration in unfamiliar tone sequences. Participants were presented with a target sequence consisting of 2 tones (Experiment 1) or 7 tones (Experiments 2 and 3) and then a probe tone. Participants indicated whether the probe tone matched 1 of the target tones in both pitch and duration. Error rates were relatively low if the probe tone matched 1 of the target tones or if it differed from target tones in pitch, duration, or both. Error rates were remarkably high, however, if the probe tone combined the pitch of 1 target tone with the duration of a different target tone. The results suggest that illusory conjunctions of these dimensions frequently occur. A mathematical model is presented that accounts for the relative contribution of pitch errors, duration errors, and illusory conjunctions of pitch and duration.
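The kind of model described can be illustrated with a deliberately simplified independent-feature sketch. The parameterization below (a pitch-error rate, a duration-error rate, and a misbinding rate) is my own illustration of how such a model partitions errors, not the authors' actual equations.

```python
def predicted_yes_rates(e_pitch, e_dur, c_bind):
    """Predicted P("probe was studied") for four probe types under a toy
    feature-binding model: features can be lost independently (e_pitch,
    e_dur) or misbound across tones (c_bind). Hypothetical parameters."""
    return {
        # correct pitch + correct duration, taken from the same tone
        "match": (1 - e_pitch) * (1 - e_dur),
        # both features were studied but belong to different tones:
        # accepted only via an illusory conjunction
        "recombined": c_bind * (1 - e_pitch) * (1 - e_dur),
        # probe with a new pitch: accepted only if pitch memory failed
        "new_pitch": e_pitch,
        # new pitch and new duration: both feature memories must fail
        "new_both": e_pitch * e_dur,
    }

rates = predicted_yes_rates(e_pitch=0.10, e_dur=0.15, c_bind=0.50)
# The signature result: recombined probes draw far more false "yes"
# responses than probes containing genuinely new features.
print(rates["recombined"] > rates["new_both"])  # True
```

Fitting the three parameters to observed error rates is what lets such a model separate simple feature errors from true illusory conjunctions.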

11.
Three experiments examined perceptual interactions between musical pitch and timbre. Exp 1, through the use of the Garner classification tasks, found that pitch and timbre of isolated tones interact. Classification times showed interference from uncorrelated variation in the irrelevant attribute and facilitation from correlated variation; the effects were symmetrical. Exps 2 and 3 examined how musical pitch and timbre function in longer sequences. In recognition memory tasks, a target tone always appeared in a fixed position in the sequences, and listeners were instructed to attend to either its pitch or its timbre. For successive tones, no interactions between timbre and pitch were found. That is, changing the pitches of context tones did not affect timbre recognition, and vice versa. The tendency to perceive pitch in relation to other context pitches was strong and unaffected by whether timbre was constant or varying. In contrast, the relative perception of timbre was weak and was found only when pitch was constant. These results suggest that timbre is perceived more in absolute than in relative terms. Perceptual implications for creating patterns in music with timbre variations are discussed.

12.
The present study was designed to explore serial position and suffix effects in the short-term retention of nonverbal sounds. In contrast with previous studies of these effects, a probe recognition paradigm was used to minimize the possibility that participants would use a verbal labelling strategy. On each trial, participants heard a memory set consisting of three pure tones, followed 5 seconds later by a probe tone. Participants were required to indicate whether or not the probe tone had been a member of the memory set. On most trials, a suffix sound was presented 1 second following the third sound in the memory set. Results revealed that tones presented in the first and last positions of the memory set were recognized more accurately than were tones presented in the middle position. Furthermore, recognition of sounds presented in the last position was compromised when the memory set was followed by a postlist suffix of similar pitch, spectral composition, and spatial location.

13.
It is often hypothesized that speech production units are less distinctive in young children and that generalized movement primitives, or templates, serve as a base on which distinctive, mature templates are later elaborated. This hypothesis was examined by analyzing the shape and stability of single close–open speech movements of the lower lip recorded in 4-year-old, 7-year-old, and adult speakers during production of utterances that varied in only a single phoneme. To assess the presence of a generalized template, lower lip movement sequences were time and amplitude normalized, and a pattern recognition procedure was implemented. The findings indicate that speech movements of children already converged on phonetically distinctive patterns by 4 years of age. In contrast, an index of spatiotemporal stability demonstrated that the stability of underlying patterning of the movement sequence improves with maturation.

14.
Perceptual evidence suggests that young children do not imitate adult-modeled intonation patterns with a rising pitch contour (rising tones) as well as those with a falling pitch contour (falling tones). To investigate the acoustic basis of this uneven imitation pattern, 10 4-year-old children were asked to imitate short sentences with falling and rising tones in 4 sentence contexts called "intonation groups." The results indicated that the children used more falling tones than adults in most intonation groups. When the children matched the adult-modeled contour direction (falling or rising), the children's speed of pitch change was comparable to that of adults in the falling tones of final intonation groups and in the rising tones of nonfinal groups, but was slower than that of adults in the complementary environments. In a manner consistent with previously reported perceptual data, the instrumental findings indicate that rising tones may be more difficult for 4-year-old children to produce than falling tones. The results additionally suggest that children's intonation is sensitive not only to the direction of tonal contours but also to their position in sentence-final versus nonfinal intonation groups.

15.
Examined the capacity of starlings to perceive and process pitch information in serial acoustic patterns in 7 experiments. Exps I–III studied the discrimination of sequences of tones that rose or fell in pitch when the interval size between successive pitches changed relative to a standard baseline interval. The rising/falling discrimination maintained itself—the birds showed perceptual constancy—when intervals were halved, varied randomly in size, or changed to a continuous frequency sweep. Exps IV–VI examined discrimination performance when pitch contour (i.e., up and down pitch relation from tone to tone) was changed from the baseline patterns. Ss responded preferentially to early rising/falling pitch information on a pitch sequence; they ignored later relative pitch information even when it contradicted early information. However, they could delay pitch processing if initial tones lacked up or down pitch cues. Exp VII was an exploration of the birds' consistent predilection to use absolute frequency value, as opposed to relative pitch, as a discriminative cue in pitch pattern perception. Results suggest a comparison between the cognitive ability of humans and that of animals to process serial acoustic information. (33 ref)

16.
Four experiments are reported that examine Ss' ability to form and use images of tones and chords. In Exps 1 and 3, Ss heard a cue tone or chord and formed an image of a tone or chord one whole step in pitch above the cue. This image was then compared to a probe tone or chord that was either the same as the image in pitch, different from the image in pitch and harmonically closely related, or different and harmonically distantly related. In Exp 3, a random-tone mask was used to control for possible contributions of the cue in echoic memory. In both experiments tone images were formed faster than chord images, a result consistent with the idea of structural complexity as a determinant of image formation time. Response times and accuracy rates were found to parallel results found in music perception studies, results consistent with the idea of shared mechanisms in the processing of musical images and percepts. Exps 2 and 4 were control experiments examining the possible influence of demand characteristics and Ss' knowledge. Findings rule out the possibility that demand characteristics and Ss' knowledge were solely responsible for the results of Exps 1 and 3 and support the role of imagery.

17.
In Exp 1, definitions of low-frequency words were presented for on-line written recall. Each definition was followed by a nonword speech suffix presented in the same voice as the definition, the same nonword presented in a different voice, or a tone. There was a significant reduction in the recall of the terminal words of the definitions in the speech suffix conditions compared with the tone control. This pattern was replicated in Exp 2, in which Ss did not begin their recall until the suffix item or tone was presented, although the magnitude of the suffix effect was reduced in this experiment. In Exp 3, the suffix effect was considerably reduced compared with the suffix effect found with the definitions presented in Exps 1 and 2. This pattern was replicated in Exp 4, in which Ss did not begin their recall of the story sentences until the speech suffix or tone was presented. Results suggest that auditory memory interference can take place for linguistically coherent speech, although the magnitude of the interference decreases as one increases the level of linguistic structure in the to-be-recalled materials.

18.
This study investigated whether nonverbal auditory memory representations can be affected by rehearsal strategies. The comparison of the pitches of 2 tones separated by a silent, variable delay interval was examined in 2 experiments, both when participants were instructed to rehearse the pitch of the first tone covertly during the intertone interval and when such rehearsal was prevented by 1 of 2 attention-demanding distractor tasks. In both experiments, delayed tone comparison performance was superior when participants were permitted to rehearse, and the type of distractor task (verbal vs. auditory) had no effect on performance under distraction instructions. The results suggest that auditory imagery can be used strategically to slow the rate of decay of auditory information for tone pitch.

19.
The study tests the hypothesis of an embodied associative triangle among relative tone pitch (i.e., high or low tones), vertical movement, and facial emotion. In particular, it is tested whether relative pitch automatically activates facial expressions of happiness and anger as well as vertical head movements. Results show robust congruency effects: happiness expressions and upward head tilts are imitated faster when paired with high rather than low tones, while anger expressions and downward head tilts are imitated faster when paired with low rather than high tones. The results add to the growing evidence favoring an embodiment account that emphasizes multimodal representations as the basis of cognition, emotion, and action.

20.
Two experiments, in which the authors served as Ss, investigated the control of individual speech gestures by examining laryngeal and tongue movements during vowel and consonant production. A number of linguistic manipulations known to alter the durational characteristics of speech (speech rate, lexical stress, and phonemic identity) were tested. In all cases, a consistent pattern was observed in the kinematics of the laryngeal and tongue gestures. The ratio of maximum instantaneous velocity to movement amplitude, a kinematic index of mass-normalized stiffness, increased systematically as movement duration decreased. Specifically, the ratio of maximum velocity to movement amplitude varied as a function of a parameter serving as an index of velocity profile shape times the reciprocal of movement duration. The conformity of the data to this relation indicates that durational change was accomplished by scalar adjustment of a base velocity form. Findings are consistent with the idea that kinematic change is produced by the specification of articulator stiffness. (49 ref)
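The stiffness index described (peak velocity divided by movement amplitude) and its scaling with the reciprocal of movement duration can be checked on a synthetic gesture. The smooth displacement profile below is a stand-in for a real articulator trajectory, chosen only because time-rescaling it leaves the velocity-profile shape parameter constant, which is the relation the abstract reports.

```python
import numpy as np

def kinematic_index(position, dt):
    """Peak-velocity-to-amplitude ratio of a single movement,
    a kinematic index of mass-normalized stiffness."""
    velocity = np.gradient(position, dt)
    amplitude = position.max() - position.min()
    return np.abs(velocity).max() / amplitude

def gesture(duration, dt=0.001, amp=1.0):
    """Smooth unit close-open displacement over `duration` seconds
    (a minimum-jerk-style profile: 10t^3 - 15t^4 + 6t^5)."""
    t = np.arange(0.0, duration, dt) / duration
    return amp * (10 * t**3 - 15 * t**4 + 6 * t**5)

# Halving movement duration should roughly double the index,
# consistent with scalar adjustment of a base velocity form.
r_slow = kinematic_index(gesture(0.40), 0.001)
r_fast = kinematic_index(gesture(0.20), 0.001)
print(round(r_fast / r_slow, 2))  # 2.0
```

Because the rescaled gesture keeps the same velocity-profile shape, the index varies purely as shape constant × (1/duration), which is the relation the data were reported to conform to.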


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)