Similar Articles
 20 similar articles found (search time: 272 ms)
1.
Infants' long-term memory for the phonological patterns of words versus the indexical properties of talkers' voices was examined in 3 experiments using the Headturn Preference Procedure (D. G. Kemler Nelson et al., 1995). Infants were familiarized with repetitions of 2 words and tested on the next day for their orientation times to 4 passages--2 of which included the familiarized words. At 7.5 months of age, infants oriented longer to passages containing familiarized words when these were produced by the original talker. At 7.5 and 10.5 months of age, infants did not recognize words in passages produced by a novel female talker. In contrast, 7.5-month-olds demonstrated word recognition in both talker conditions when presented with passages produced by both the original and the novel talker. The findings suggest that talker-specific information can prime infants' memory for words and facilitate word recognition across talkers. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Recognition memory for spoken words was investigated with a continuous recognition memory task. Independent variables were number of intervening words (lag) between initial and subsequent presentations of a word, total number of talkers in the stimulus set, and whether words were repeated in the same voice or a different voice. In Exp 1, recognition judgments were based on word identity alone. Same-voice repetitions were recognized more quickly and accurately than different-voice repetitions at all values of lag and at all levels of talker variability. In Exp 2, recognition judgments were based on both word identity and voice identity. Ss recognized repeated voices quite accurately. Gender of the talker affected voice recognition but not item recognition. These results suggest that detailed information about a talker's voice is retained in long-term memory representations of spoken words. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Three experiments were conducted to investigate recall of lists of words containing items spoken by either a single talker or by different talkers. In each experiment, recall of early list items was better for lists spoken by a single talker than for lists of the same words spoken by different talkers. The use of a memory preload procedure demonstrated that recall of visually presented preload digits was superior when the words in a subsequent list were spoken by a single talker than by different talkers. In addition, a retroactive interference task demonstrated that the effects of talker variability on the recall of early list items were not due to use of talker-specific acoustic cues in working memory at the time of recall. Taken together, the results suggest that word lists produced by different talkers require more processing resources in working memory than do lists produced by a single talker. The findings are discussed in terms of the role that active rehearsal plays in the transfer of spoken items into long-term memory and the factors that may affect the efficiency of rehearsal. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
In a series of experiments, the authors investigated the effects of talker variability on children's word recognition. In Experiment 1, when stimuli were presented in the clear, 3- and 5-year-olds were less accurate at identifying words spoken by multiple talkers than those spoken by a single talker when the multiple-talker list was presented first. In Experiment 2, when words were presented in noise, 3-, 4-, and 5-year-olds again performed worse in the multiple-talker condition than in the single-talker condition, this time regardless of order; processing multiple talkers became easier with age. Experiment 3 showed that both children and adults were slower to repeat words from multiple-talker than those from single-talker lists. More important, children (but not adults) matched acoustic properties of the stimuli (specifically, duration). These results provide important new information about the development of talker normalization in speech perception and spoken word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
6.
A series of experiments was conducted to determine if linguistic representations accessed during reading include auditory imagery for characteristics of a talker's voice. In 3 experiments, participants were familiarized with two talkers during a brief prerecorded conversation. One talker spoke at a fast speaking rate, and one spoke at a slow speaking rate. Each talker was identified by name. At test, participants were asked to either read aloud (Experiment 1) or silently (Experiments 1, 2, and 3) a passage that they were told was written by either the fast or the slow talker. Reading times, both silent and aloud, were significantly slower when participants thought they were reading a passage written by the slow talker than when reading a passage written by the fast talker. Reading times differed as a function of passage author more for difficult than for easy texts, and individual differences in general auditory imagery ability were related to reading times. These results suggest that readers engage in a type of auditory imagery while reading that preserves the perceptual details of an author's voice. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
In two experiments, English-Spanish bilinguals read passages, performing letter detection on some passages by circling target letters as they read. Detection passages were sometimes familiarized (primed) by prior reading of the same passage or a translation of it. Participants detected letters in English passages in Experiment 1 and in Spanish passages in Experiment 2. For both experiments, a missing letter effect occurred (depressed detection accuracy on frequent function words relative to less frequent content words). Familiarization promoted overall improvements in letter detection only for English passages, suggesting that reprocessing benefits depend on high language fluency. For Spanish passages, cognates engendered greater error rates than noncognates; the visual similarity of Spanish and English cognates apparently enabled faster identification of Spanish cognates in a way unaffected by familiarization of the whole text passage. Priming by familiarized text was significantly higher when the passages were in the same language than when they were in different languages, suggesting that the reprocessing benefits are at the word level instead of the semantic level. These results are consistent with the GO model of reading (Greenberg, Healy, Koriat, & Kreiner, 2004) but require an expanded consideration of attention redistribution processes in that model. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
This study examined infants' abilities to separate speech from different talkers and to recognize a familiar word (the infant's own name) in the context of noise. In 4 experiments, infants heard repetitions of either their names or unfamiliar names in the presence of background babble. Five-month-old infants listened longer to their names when the target voice was 10 dB, but not 5 dB, more intense than the background. Nine-month-olds likewise failed to identify their names at a 5-dB signal-to-noise ratio, but 13-month-olds succeeded. Thus, by 5 months, infants possess some capacity to selectively attend to an interesting voice in the context of competing distractor voices. However, this ability is quite limited and develops further when infants near 1 year of age. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
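The 10-dB and 5-dB signal-to-noise ratios in the abstract above follow the standard decibel definition. A minimal sketch (illustrative only, not drawn from the study's stimuli) of the relationship between amplitude ratios and dB:

```python
import math

def snr_db(signal_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio in decibels from RMS amplitudes.

    dB = 20 * log10(A_signal / A_noise) for amplitude ratios
    (equivalently 10 * log10 of the power ratio).
    """
    return 20 * math.log10(signal_rms / noise_rms)

# A +10 dB SNR (the level 5-month-olds succeeded at) means the target
# voice has about 3.16x the RMS amplitude of the background babble;
# +5 dB corresponds to about 1.78x.
print(round(snr_db(3.1623, 1.0), 1))  # 10.0
print(round(snr_db(1.7783, 1.0), 1))  # 5.0
```

So the difference between the conditions infants passed and failed is roughly a halving of the target voice's relative power.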

9.
Two talkers' productions of the same phoneme may be quite different acoustically, whereas their productions of different speech sounds may be virtually identical. Despite this lack of invariance in the relationship between the speech signal and linguistic categories, listeners experience phonetic constancy across a wide range of talkers, speaking styles, linguistic contexts, and acoustic environments. The authors present evidence that perceptual sensitivity to talker variability involves an active cognitive mechanism: Listeners expecting to hear 2 different talkers differing only slightly in average pitch showed performance costs typical of adjusting to talker variability, whereas listeners hearing the same materials but expecting a single talker or given no special instructions did not show these performance costs. The authors discuss the implications for understanding phonetic constancy despite variability between talkers (and other sources of variability) and for theories of speech perception. The results provide further evidence for active, controlled processing in real-time speech perception and are consistent with a model of talker normalization that involves contextual tuning. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
This study examined 2- to 3-month-olds' representations of bisyllables. In 3 experiments, infants were familiarized with sets of bisyllables that either did or did not share a common consonant–vowel (CV) syllable. In Experiment 1, infants detected the presence of a new bisyllable in the test phase except when it shared a common initial CV syllable. A modified version of the high-amplitude sucking procedure, incorporating a 2-min delay period, tested infants' retention of information about bisyllables in the remaining 2 experiments. In Experiment 2, infants were significantly more likely to retain information about bisyllables that shared the same initial CV syllable. Finally, the authors investigated whether infants simply benefited from the presence of 2 common phonetic segments, regardless of whether these came from the same CV syllable. The results showed that CV syllable organization is important in infants' ability to encode and retain information about bisyllables. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Prior research suggests that stress cues are particularly important for English-hearing infants' detection of word boundaries. It is unclear, though, how infants learn to attend to stress as a cue to word segmentation. This series of experiments was designed to explore infants' attention to conflicting cues at different ages. Experiment 1 replicated previous findings: When stress and statistical cues indicated different word boundaries, 9-month-old infants used syllable stress as a cue to segmentation while ignoring statistical cues. However, in Experiment 2, 7-month-old infants attended more to statistical cues than to stress cues. These results raise the possibility that infants use their statistical learning abilities to locate words in speech and use those words to discover the regular pattern of stress cues in English. Infants at different ages may deploy different segmentation strategies as a function of their current linguistic experience. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
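The "statistical cues" in segmentation studies like the one above are typically syllable transitional probabilities, TP(B|A) = P(next syllable is B | current syllable is A): word boundaries tend to fall where TP dips. A minimal sketch of that computation (the toy syllable stream is hypothetical, not the authors' materials):

```python
from collections import Counter

def transitional_probs(syllables):
    """P(next | current) for each adjacent syllable pair in a stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])  # how often each syllable is followed
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy stream built from two "words" (bi-da, ku-po) in varying order.
stream = "bi da ku po bi da bi da ku po ku po bi da".split()
tps = transitional_probs(stream)

# Within-word transitions (bi->da, ku->po) have TP = 1.0, while
# across-word transitions (e.g., da->ku) are lower -- the dip that
# marks a candidate word boundary.
print(tps[("bi", "da")])          # 1.0
print(tps[("da", "ku")] < 1.0)    # True
```

An infant-style statistical learner, on this account, posits boundaries at the low-TP transitions; the experiments above pit those boundaries against stress-based ones.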

12.
In this study, 1.5-year-olds were taught a novel word. Some children were familiarized with the word's phonological form before learning the word's meaning. Fidelity of phonological encoding was tested in a picture-fixation task using correctly pronounced and mispronounced stimuli. Only children with additional exposure in familiarization showed reduced recognition performance given slight mispronunciations relative to correct pronunciations; children with fewer exposures did not. Mathematical modeling of vocabulary exposure indicated that children may hear thousands of words frequently enough for accurate encoding. The results provide evidence compatible with partial failure of phonological encoding at 19 months of age, demonstrate that this limitation in learning does not always hinder word recognition, and show the value of infants' word-form encoding in early lexical development. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
The contribution of reduced speaking rate to the intelligibility of "clear" speech (Picheny, Durlach, & Braida, 1985) was evaluated by adjusting the durations of speech segments (a) via nonuniform signal time-scaling, (b) by deleting and inserting pauses, and (c) by eliciting materials from a professional speaker at a wide range of speaking rates. Key words in clearly spoken nonsense sentences were substantially more intelligible than those spoken conversationally (15 points) when presented in quiet for listeners with sensorineural impairments and when presented in a noise background to listeners with normal hearing. Repeated presentation of conversational materials also improved scores (6 points). However, degradations introduced by segment-by-segment time-scaling rendered this time-scaling technique problematic as a means of converting speaking styles. Scores for key words excised from these materials and presented in isolation generally exhibited the same trends as in sentence contexts. Manipulation of pause structure reduced scores both when additional pauses were introduced into conversational sentences and when pauses were deleted from clear sentences. Key-word scores for materials produced by a professional talker were inversely correlated with speaking rate, but conversational rate scores did not approach those of clear speech for other talkers. In all experiments, listeners with normal hearing exposed to flat-spectrum background noise performed similarly to listeners with hearing loss.

14.
Using a habituation/test procedure, the author investigated adults' and infants' perception of auditory–visual temporal synchrony. Participants were familiarized with a bouncing green disk and a sound that occurred each time the disk bounced. Then, they were given a series of asynchrony test trials where the sound occurred either before or after the disk bounced. The magnitude of the auditory–visual temporal asynchrony threshold differed markedly in adults and infants. The threshold for the detection of asynchrony created by a sound preceding a visible event was 65 ms in adults and 350 ms in infants, and for the detection of asynchrony created by a sound following a visible event it was 112 ms in adults and 450 ms in infants. Also, infants did not respond to asynchronies that exceeded intervals that yielded reliable discrimination. Infants' perception of auditory–visual temporal unity is guided by a synchrony and an asynchrony window, both of which become narrower in development. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

15.
Eight experiments tested the hypothesis that infants' word segmentation abilities are reducible to familiar sound-pattern parsing regardless of actual word boundaries. This hypothesis was disconfirmed. In experiments using the headturn preference procedure, 8.5-month-olds did not mis-segment a consonant–vowel–consonant (CVC) word (e.g., dice) from passages containing the corresponding phonemic pattern across a word boundary (C#VC; "cold ice"), but they segmented it when the word was really present ("roll dice"). However, they did not segment the real vowel-consonant (VC) word (ice in "cold ice") until 16 months. Yet, at that age, they still did not false alarm on the straddling CVC word. Thus, infants do not simply respond to recurring phonemic patterns. Instead, they are sensitive to both acoustic and allophonic cues to word boundaries. Moreover, there is a sizable developmental gap between consonant- and vowel-initial word segmentation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
In 3 experiments, the authors used an object-examining task to investigate the role of perceptual similarity in infants' categorization. In Experiment 1, infants were familiarized with a set of either perceptually similar or perceptually variable exemplars from 1 category and tested with novel exemplars from both categories. Ten-month-olds did not respond to the category in either condition, and 13-month-olds responded categorically in both conditions but somewhat differently in the 2 conditions. Experiment 2 showed that when 10-month-olds were familiarized with similar exemplars but not with variable exemplars, they responded to the categorical distinction when given tests with typical exemplars. Experiment 3 established that 10-month-olds could differentiate among the exemplars. These results suggest that the perceptual similarity of the exemplars influences infants' recognition of categorical distinctions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Newborn infants were exposed to speech sounds in 2 sessions separated by 24 hrs. Their habituation and recovery to these sounds were assessed by spontaneous head orienting toward the sound's location. 36 neonates were assigned to 1 of 3 groups: a no-change group that heard the same word both days, a change group that heard a different word each day, and a Day 2 age-control group that heard 1 of the 2 words for the 1st time on Day 2. Both groups habituated to the sound on Day 1 and recovered head turning after the 24-hr delay, but infants who heard the same word again responded less than did age controls. In addition, these infants also began turning away from the sound, unlike the other 2 groups. Following habituation, all groups displayed comparable levels of recovery by turning toward a novel posttest stimulus. Neonates appeared to retain memory for a specific sound over a 24-hr period when presented with the same sound over both days. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Traditional word-recognition tests typically use phonetically balanced (PB) word lists produced by one talker at one speaking rate. Intelligibility measures based on these tests may not adequately evaluate the perceptual processes used to perceive speech under more natural listening conditions involving many sources of stimulus variability. The purpose of this study was to examine the influence of stimulus variability and lexical difficulty on the speech-perception abilities of 17 adults with mild-to-moderate hearing loss. The effects of stimulus variability were studied by comparing word-identification performance in single-talker versus multiple-talker conditions and at different speaking rates. Lexical difficulty was assessed by comparing recognition of "easy" words (i.e., words that occur frequently and have few phonemically similar neighbors) with "hard" words (i.e., words that occur infrequently and have many similar neighbors). Subjects also completed a 20-item questionnaire to rate their speech understanding abilities in daily listening situations. Both sources of stimulus variability produced significant effects on speech intelligibility. Identification scores were poorer in the multiple-talker condition than in the single-talker condition, and word-recognition performance decreased as speaking rate increased. Lexical effects on speech intelligibility were also observed. Word-recognition performance was significantly higher for lexically easy words than lexically hard words. Finally, word-recognition performance was correlated with scores on the self-report questionnaire rating speech understanding under natural listening conditions. 
The pattern of results suggests that perceptually robust speech-discrimination tests are able to assess several underlying aspects of speech perception in the laboratory and clinic that appear to generalize to conditions encountered in natural listening situations where the listener is faced with many different sources of stimulus variability. That is, word-recognition performance measured under conditions where the talker varied from trial to trial was better correlated with self-reports of listening ability than was performance in a single-talker condition where variability was constrained.
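The "easy" and "hard" words in the study above are defined by word frequency and phonological neighborhood density (how many words differ from the target by one phoneme). A minimal sketch of one common neighbor definition, a single phoneme substitution, deletion, or insertion, using a hypothetical mini-lexicon of phoneme tuples (not the study's materials):

```python
def is_neighbor(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, deletion, or insertion (edit distance == 1)."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:  # same length: exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if la < lb else (b, a)
    # lengths differ by one: deleting some phoneme from the longer
    # sequence must yield the shorter one
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

# Hypothetical mini-lexicon mapping words to phoneme tuples.
lexicon = {
    "cat": ("k", "ae", "t"), "bat": ("b", "ae", "t"),
    "cut": ("k", "ah", "t"), "cab": ("k", "ae", "b"),
    "at": ("ae", "t"), "dog": ("d", "ao", "g"),
}

def density(word):
    """Count of lexicon entries within one phoneme edit of `word`."""
    return sum(is_neighbor(lexicon[word], p)
               for w, p in lexicon.items() if w != word)

print(density("cat"))  # 4 -> dense neighborhood, "hard"-style word
print(density("dog"))  # 0 -> sparse neighborhood
```

On this definition, a "hard" word is one that is both low in frequency and crowded by many such neighbors, so it competes with them during recognition; an "easy" word is frequent and sparsely neighbored.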

19.
Cooperative conversation has been shown to foster interpersonal postural coordination. The authors investigated whether such coordination is mediated by the influence of articulation on postural sway. In Experiment 1, talkers produced words in synchrony or in alternation, as the authors varied speaking rate and word similarity. Greater shared postural activity was found for the faster speaking rate. In Experiment 2, the authors demonstrated that shared postural activity also increases when individuals speak the same words or speak words that have similar stress patterns. However, this increase in shared postural activity is present only when participants' data are compared with those of their partner, who was present during the task, but not when compared with the data of a member of a different pair speaking the same word sequences as those of the original partner. The authors' findings suggest that interpersonal postural coordination observed during conversation is mediated by convergent speaking patterns. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
In this study we examined developmental change in category representation in the first year. In Experiment 1, we tested one hundred eight 3-, 5-, and 7-month-old infants in a visual recognition memory procedure. Infants were familiarized with category instances derived from one of six prototypical dot patterns differing in "goodness of form." The results of between- and within-category test comparisons indicated change in the nature but not the structure of infant form categories. Although infants of all ages demonstrated the ability to detect regularity across pattern variation, there were systematic changes with age in the kinds of regularities that infants were able to detect and represent. For all categories formed, however, infants' category representations conformed to a prototypical structure regardless of age. A second experiment ruled out a priori preferences as the basis for the pattern of results observed in Experiment 1. The observed changes in infant categorization are discussed in relation to pattern symmetry. (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号