Similar Articles (20 results)
1.
To compare the properties of inner and overt speech, Oppenheim and Dell (2008) counted participants' self-reported speech errors when reciting tongue twisters either overtly or silently and found a bias toward substituting phonemes that resulted in words in both conditions, but a bias toward substituting similar phonemes only when speech was overt. Here, we report 3 experiments revisiting their conclusion that inner speech remains underspecified at the subphonemic level, which they simulated within an activation-feedback framework. In 2 experiments, participants recited tongue twisters that could result in the errorful substitutions of similar or dissimilar phonemes to form real words or nonwords. Both experiments included an auditory masking condition, to gauge the possible impact of loss of auditory feedback on the accuracy of self-reporting of speech errors. In Experiment 1, the stimuli were composed entirely from real words, whereas, in Experiment 2, half the tokens used were nonwords. Although masking did not have any effects, participants were more likely to report substitutions of similar phonemes in both experiments, in inner as well as overt speech. This pattern of results was confirmed in a 3rd experiment using the real-word materials from Oppenheim and Dell (in press). In addition to these findings, a lexical bias effect found in Experiments 1 and 3 disappeared in Experiment 2. Our findings support a view in which plans for inner speech are indeed specified at the feature level, even when there is no intention to articulate words overtly, and in which editing of the plan for errors is implicated. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Reports an error in "Interactive use of lexical information in speech perception" by Cynthia M. Connine and Charles Clifton (Journal of Experimental Psychology: Human Perception and Performance, 1987[May], Vol 13[2], 291-299). In the aforementioned article, Figures 1 and 2 were inadvertently transposed. The figure on p. 294 is actually Figure 2, and the figure on p. 296 is actually Figure 1. The captions are correct as they stand. (The following abstract of the original article appeared in record 1987-23984-001.) Two experiments are reported that demonstrate contextual effects on identification of speech voicing continua. Experiment 1 demonstrated the influence of lexical knowledge on identification of ambiguous tokens from word–nonword and nonword–word continua. Reaction times for word and nonword responses showed a word advantage only for ambiguous stimulus tokens (at the category boundary); no word advantage was found for clear stimuli (at the continua endpoints). Experiment 2 demonstrated an effect of a postperceptual variable, monetary payoff, on nonword–nonword continua. Identification responses were influenced by monetary payoff, but reaction times for bias-consistent and bias-inconsistent responses did not differ at the category boundary. An advantage for bias-consistent responses was evident at the continua endpoints. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether listening to a second language (L2) is influenced by knowledge of the native language (L1) and, more important, whether listening to the L1 is also influenced by knowledge of an L2. Additionally, we investigated whether the listener's selectivity of lexical access is influenced by the speaker's L1 (and thus his or her accent). With this aim, Dutch–English bilinguals completed an English (Experiment 1) and a Dutch (Experiment 3) auditory lexical decision task. As a control, the English auditory lexical decision task was also completed by English monolinguals (Experiment 2). Targets were pronounced by a native Dutch speaker with English as the L2 (Experiments 1A, 2A, and 3A) or by a native English speaker with Dutch as the L2 (Experiments 1B, 2B, and 3B). In all experiments, Dutch–English bilinguals recognized interlingual homophones (e.g., lief [sweet]–leaf /li:f/) significantly more slowly than matched control words, whereas the English monolinguals showed no effect. These results indicate that (a) lexical access in bilingual auditory word recognition is not language selective in L2, nor in L1, and (b) language-specific subphonological cues do not annul cross-lingual interactions. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

4.
When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses: auditory correct, visual correct, fusion (the so-called McGurk effect), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for the distribution of the four types of perceptual responses to 384 different stimuli from four talkers. The measures included mutual information, correlations, and acoustic measures, all representing audiovisual stimulus relationships. In Experiment 1, open-set perceptual responses were obtained for acoustic /bɑ/ or /lɑ/ dubbed to video /bɑ, dɑ, gɑ, vɑ, zɑ, lɑ, wɑ, ðɑ/. The talker, the video syllable, and the acoustic syllable significantly influenced the type of response. In Experiment 2, the best predictors of response category proportions were a subset of the physical stimulus measures, which accounted for between 17% and 52% of the variance in the perceptual response category proportions. That audiovisual stimulus relationships can account for perceptual response distributions supports the possibility that internal representations are based on modality-specific stimulus relationships. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
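The audiovisual relationship measures in the entry above lend themselves to a compact illustration. Below is a minimal sketch of one of them, mutual information, computed from a discrete co-occurrence table; the bin counts and variable names are hypothetical stand-ins for whatever quantized auditory and visual features one might extract, not the authors' actual measures.

```python
import numpy as np

def mutual_information(joint_counts):
    """Mutual information (in bits) between two discrete variables,
    given a 2-D contingency table of co-occurrence counts."""
    p_xy = joint_counts / joint_counts.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal over columns
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal over rows
    nonzero = p_xy > 0                      # skip 0 * log(0) terms
    return np.sum(p_xy[nonzero] * np.log2(p_xy[nonzero] / (p_x @ p_y)[nonzero]))

# Hypothetical table: rows = quantized auditory feature bins,
# columns = quantized visual (lip) feature bins
counts = np.array([[30, 5, 2],
                   [4, 25, 6],
                   [1, 7, 20]], dtype=float)
print(f"I(A;V) = {mutual_information(counts):.3f} bits")
```

Higher values indicate a tighter statistical coupling between what is heard and what is seen, which is the sense in which such measures "represent audiovisual stimulus relationships."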

5.
This study examines the extent to which acoustic parameters contribute to lexical effects on the phonetic categorization of speech. Experiment 1 was designed to replicate previous findings. Two test continua were created varying in voice onset time. Results of both identification and reaction time (RT) range data showed an effect of lexical status at the phonetic boundary, but only in the slowest RT ranges, suggesting that lexical effects on phonetic categorization are postperceptual. Experiment 2 explored whether the lexical effect would emerge when the stimulus continua more nearly approximated the parameter values of natural speech. Both identification and RT range data indicated that the lexical effect disappeared. These results suggest that without attention to the acoustic structure of the stimuli, the role of top-down processing in phonetic categorization may be overemphasized. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
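The RT-range analysis described in entry 5 can be sketched as follows: sort boundary-token trials by reaction time, split them into fast-to-slow ranges, and compute the lexical effect within each range. Everything below (bin count, simulated data, the 0.2 effect size) is illustrative, not the study's actual procedure or data.

```python
import numpy as np

def lexical_effect_by_rt_range(rts, word_responses, n_bins=3):
    """Split boundary-token trials into RT ranges (fast -> slow) and
    return the proportion of word-consistent responses in each range."""
    order = np.argsort(rts)                       # indices from fastest to slowest
    bins = np.array_split(order, n_bins)
    return [float(np.mean(np.asarray(word_responses)[idx])) for idx in bins]

rng = np.random.default_rng(0)
rts = rng.normal(700, 150, 300)                   # hypothetical RTs in ms
# Simulate a lexical bias that appears only on slower trials
word_resp = rng.random(300) < (0.5 + 0.2 * (rts > np.median(rts)))
print(lexical_effect_by_rt_range(rts, word_resp))  # rises from ~.5 to ~.7
```

A postperceptual account predicts exactly this pattern: the lexical effect is confined to the slowest ranges, where there is time for later processes to influence the response.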

6.
Although word stress has been hailed as a powerful speech-segmentation cue, the results of 5 cross-modal fragment priming experiments revealed limitations to stress-based segmentation. Specifically, the stress pattern of auditory primes failed to have any effect on the lexical decision latencies to related visual targets. A determining factor was whether the onset of the prime was coarticulated with the preceding speech fragment. Uncoarticulated (i.e., concatenated) primes facilitated priming. Coarticulated ones did not. However, when the primes were presented in a background of noise, the pattern of results reversed, and a strong stress effect emerged: Stress-initial primes caused more priming than non-initial-stress primes, regardless of the coarticulatory cues. The results underscore the role of coarticulation in the segmentation of clear speech and that of stress in impoverished listening conditions. More generally, they call for an integrated and signal-contingent approach to speech segmentation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
A number of recent studies have questioned the idea that lexical selection during speech production is a competitive process. One type of evidence against selection by competition is the observation that in the picture–word interference task semantically related distractors may facilitate the naming of a picture, whereas the selection by competition account predicts them to interfere. In the experiments reported in this article, the authors systematically varied, for a given type of semantic relation—that is, basic-level distractors (e.g., fish) during subordinate-level naming (e.g., carp)—the modality in which distractor words were presented (auditory vs. visual) and the proportion of response-congruent trials (i.e., trials allowing for the correct naming response to be derived from both the distractor and the target). With auditory distractors, semantic interference was obtained irrespective of the proportion of response-congruent trials (low in Experiment 1, high in Experiment 2). With visual distractors, no semantic effect was obtained with a low proportion of response-congruent trials (Experiment 3), whereas a semantic facilitation effect was obtained with a high proportion of response-congruent trials (Experiment 4). The authors propose that two processes contribute to semantic effects observed in the picture–word interference paradigm, namely selection by competition (leading to interference) and response congruency (leading to facilitation). Whether facilitation due to response congruency overrules the interference effect because of competition depends on the relative strength of these two processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
9.
Four experiments examined transfer of noncorresponding spatial stimulus-response associations to an auditory Simon task for which stimulus location was irrelevant. Experiment 1 established that, for a horizontal auditory Simon task, transfer of spatial associations occurs after 300 trials of practice with an incompatible mapping of auditory stimuli to keypress responses. Experiments 2-4 examined transfer effects within the auditory modality when the stimuli and responses were varied along vertical and horizontal dimensions. Transfer occurred when the stimuli and responses were arrayed along the same dimension in practice and transfer but not when they were arrayed along orthogonal dimensions. These findings indicate that prior task-defined associations have less influence on the auditory Simon effect than on the visual Simon effect, possibly because of the stronger tendency for an auditory stimulus to activate its corresponding response. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Two experiments examined the impact of a discrepancy in vowel quality between the auditory and visual modalities on the perception of a syllable-initial consonant. One experiment examined the effect of such a discrepancy on the McGurk effect by cross-dubbing auditory /bi/ tokens onto visual /ga/ articulations (and vice versa). A discrepancy in vowel category significantly reduced the magnitude of the McGurk effect and changed the pattern of responses. A 2nd experiment investigated the effect of such a discrepancy on the speeded classification of the initial consonant. Mean reaction times to classify the tokens increased when the vowel information was discrepant between the 2 modalities but not when the vowel information was consistent. These experiments indicate that the perceptual system is sensitive to cross-modal discrepancies in the coarticulatory information between a consonant and its following vowel during phonetic perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Five experiments monitored eye movements in phoneme and lexical identification tasks to examine the effect of within-category subphonetic variation on the perception of stop consonants. Experiment 1 demonstrated gradient effects along voice-onset time (VOT) continua made from natural speech, replicating results with synthetic speech (B. McMurray, M. K. Tanenhaus, & R. N. Aslin, 2002). Experiments 2-5 used synthetic VOT continua to examine effects of response alternatives (2 vs. 4), task (lexical vs. phoneme decision), and type of token (word vs. consonant-vowel). A gradient effect of VOT in at least one half of the continuum was observed in all conditions. These results suggest that during online spoken word recognition, lexical competitors are activated in proportion to their continuous distance from a category boundary. This gradient processing may allow listeners to anticipate upcoming acoustic-phonetic information in the speech signal and dynamically compensate for acoustic variability. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
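Entry 11's central claim, that competitor activation scales with a token's continuous distance from the category boundary, can be illustrated with a toy logistic mapping from voice-onset time to graded activation. The boundary location, slope, and word pair below are assumptions for illustration only, not the model fitted by McMurray and colleagues.

```python
import math

def competitor_activation(vot_ms, boundary_ms=20.0, slope=0.3):
    """Illustrative gradient mapping: the farther a token's VOT lies from
    the voiced/voiceless category boundary, the less the competing
    category is activated. A logistic gives graded, not all-or-none,
    activation, so within-category VOT differences still matter."""
    p_voiceless = 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary_ms)))
    return {"pear": p_voiceless, "bear": 1.0 - p_voiceless}

for vot in (0, 10, 20, 30, 40):  # voice-onset time in ms
    act = competitor_activation(vot)
    print(f"VOT {vot:2d} ms -> bear {act['bear']:.2f}, pear {act['pear']:.2f}")
```

In the eye-tracking version of this logic, fixation proportions to the competitor picture track these graded activations rather than jumping categorically at the boundary.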

12.
Research has shown that speaking in a deliberately clear manner can improve the accuracy of auditory speech recognition. Allowing listeners access to visual speech cues also enhances speech understanding. Whether the nature of information provided by speaking clearly and by using visual speech cues is redundant has not been determined. This study examined how speaking mode (clear vs. conversational) and presentation mode (auditory vs. auditory-visual) influenced the perception of words within nonsense sentences. In Experiment 1, 30 young listeners with normal hearing responded to videotaped stimuli presented audiovisually in the presence of background noise at one of three signal-to-noise ratios. In Experiment 2, 9 participants returned for an additional assessment using auditory-only presentation. Results of these experiments showed significant effects of speaking mode (clear speech was easier to understand than was conversational speech) and presentation mode (auditory-visual presentation led to better performance than did auditory-only presentation). The benefit of clear speech was greater for words occurring in the middle of sentences than for words at either the beginning or end of sentences for both auditory-only and auditory-visual presentation, whereas the greatest benefit from supplying visual cues was for words at the end of sentences spoken both clearly and conversationally. The total benefit from speaking clearly and supplying visual cues was equal to the sum of each of these effects. Overall, the results suggest that speaking clearly and providing visual speech information provide complementary (rather than redundant) information.

13.
Morphological facilitation was examined in immediate (Experiment 1) and long-term (Experiment 2) lexical decision with English materials. For the target (payment), related primes consisted of base-alone (pay), affix-plus-base (prepay), or base-plus-affix (payable) combinations, thereby defining position of overlap. In addition, modality of presentation varied for primes and targets (Experiment 1). At short lags, the advantage for prepay-payment over payable-payment type pairs was significant when primes were visual (V) and targets were auditory (A), marginal under AV conditions, and nonexistent under VV conditions. At long lags, the magnitude of VV facilitation did not vary with position of overlap. Morphological facilitation was stable across changes in modality following prefixed and simple forms, reflecting lexical architecture. By contrast, the absence of facilitation following suffixed primes presented cross-modally implicates modality-specific processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
[Correction Notice: An erratum for this article was reported in Vol 13(3) of Journal of Experimental Psychology: Human Perception and Performance (see record 2008-10755-001). In the aforementioned article, Figures 1 and 2 were inadvertently transposed. The figure on p. 294 is actually Figure 2, and the figure on p. 296 is actually Figure 1. The captions are correct as they stand.] Two experiments are reported that demonstrate contextual effects on identification of speech voicing continua. Experiment 1 demonstrated the influence of lexical knowledge on identification of ambiguous tokens from word–nonword and nonword–word continua. Reaction times for word and nonword responses showed a word advantage only for ambiguous stimulus tokens (at the category boundary); no word advantage was found for clear stimuli (at the continua endpoints). Experiment 2 demonstrated an effect of a postperceptual variable, monetary payoff, on nonword–nonword continua. Identification responses were influenced by monetary payoff, but reaction times for bias-consistent and bias-inconsistent responses did not differ at the category boundary. An advantage for bias-consistent responses was evident at the continua endpoints. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɒg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. In addition, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
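Entry 15 estimates sensitivity and response criterion with signal detection theory. A minimal sketch of those two standard measures, assuming raw trial counts and using a log-linear correction to avoid infinite z-scores, is given below; the counts are hypothetical.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response criterion (c) from raw counts.

    Applies the log-linear correction (add 0.5 to each cell) so that
    perfect hit or false-alarm rates do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa               # sensitivity: d' = z(H) - z(FA)
    criterion = -0.5 * (z_hit + z_fa)    # response bias; 0 means unbiased
    return d_prime, criterion

# Hypothetical counts for one participant in a picture detection task
print(sdt_measures(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```

Separating d' from c is what licenses the abstract's claim that the sounds changed visual *sensitivity* rather than merely shifting the participants' response bias.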

16.
Seeing a talker's face influences auditory speech recognition, but the visible input essential for this influence has yet to be established. Using a new seamless editing technique, the authors examined effects of restricting visible movement to oral or extraoral areas of a talking face. In Experiment 1, visual speech identification and visual influences on identifying auditory speech were compared across displays in which the whole face moved, the oral area moved, or the extraoral area moved. Visual speech influences on auditory speech recognition were substantial and unchanging across whole-face and oral-movement displays. However, extraoral movement also influenced identification of visual and audiovisual speech. Experiments 2 and 3 demonstrated that these results are dependent on intact and upright facial contexts, but only with extraoral movement displays. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Speech remains intelligible despite the elimination of canonical acoustic correlates of phonemes from the spectrum. A portion of this perceptual flexibility can be attributed to modulation sensitivity in the auditory-to-phonetic projection, although signal-independent properties of lexical neighborhoods also affect intelligibility in utterances composed of words. Three tests were conducted to estimate the effects of exposure to natural and sine-wave samples of speech in this kind of perceptual versatility. First, sine-wave versions of the easy and hard word sets were created, modeled on the speech samples of a single talker. The performance difference in recognition of easy and hard words was used to index the perceptual reliance on signal-independent properties of lexical contrasts. Second, several kinds of exposure produced familiarity with an aspect of sine-wave speech: (a) sine-wave sentences modeled on the same talker; (b) sine-wave sentences modeled on a different talker, to create familiarity with a sine-wave carrier; and (c) natural sentences spoken by the same talker, to create familiarity with the idiolect expressed in the sine-wave words. Recognition performance with both easy and hard sine-wave words improved after exposure only to sine-wave sentences modeled on the same talker. Third, a control test showed that signal-independent uncertainty is a plausible cause of differences in recognition of easy and hard sine-wave words. The conditions of beneficial exposure reveal the specificity of attention underlying versatility in speech perception. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
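For readers unfamiliar with the sine-wave speech used in entry 17: the technique replaces each formant of an utterance with a single time-varying sinusoid, stripping away the canonical acoustic correlates of phonemes while preserving the gross spectrotemporal pattern. The sketch below is a deliberately simplified version of that idea with made-up formant tracks; real sine-wave replicas are derived from measured formant center frequencies and amplitudes.

```python
import numpy as np

def sinewave_speech(formant_hz, formant_amp, sr=16000):
    """Sum one time-varying sinusoid per formant track -- the simplified
    idea behind a sine-wave replica. `formant_hz` is an array of shape
    (n_formants, n_samples) in Hz; `formant_amp` is (n_formants, 1)."""
    # Instantaneous phase is the running integral of frequency
    phase = 2 * np.pi * np.cumsum(formant_hz, axis=1) / sr
    return (formant_amp * np.sin(phase)).sum(axis=0)

# Hypothetical tracks: three "formants" gliding over 0.5 s
n = 8000
f = np.vstack([np.linspace(500, 700, n),     # F1
               np.linspace(1500, 1200, n),   # F2
               np.full(n, 2500.0)])          # F3
signal = sinewave_speech(f, np.array([[1.0], [0.6], [0.3]]))
```

The resulting signal contains no harmonic structure or broadband formants, yet listeners can often still recover the words, which is what makes the stimulus useful for isolating signal-independent lexical effects.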

18.
A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
The influence of addition and deletion neighbors on visual word identification was investigated in four experiments. Experiments 1 and 2 used Spanish stimuli. In Experiment 1, lexical decision latencies were slower and less accurate for words and nonwords with higher-frequency deletion neighbors (e.g., jugar in juzgar), relative to control stimuli. Experiment 2 showed a similar interference effect for words and nonwords with higher-frequency addition neighbors (e.g., conejo, which has the addition neighbor consejo), relative to control stimuli. Experiment 3 replicated this addition neighbor interference effect in a lexical decision experiment with English stimuli. Across all three experiments, interference effects were always evident for addition/deletion neighbors with word-outer overlap, usually present for those with word-initial overlap, but never present for those with word-final overlap. Experiment 4 replicated the addition/deletion neighbor inhibitory effects in a Spanish sentence reading task in which the participants' eye movements were monitored. These findings suggest that conventional orthographic neighborhood metrics should be redefined. In addition to its methodological implications, this conclusion has significant theoretical implications for input coding schemes and the mechanisms underlying word recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
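The addition and deletion neighbors in entry 19 are easy to compute mechanically: a deletion neighbor removes exactly one letter, and an addition neighbor inserts exactly one. A small sketch over a toy lexicon (just the four example words from the abstract) follows; a real neighborhood analysis would use a full frequency-tagged lexicon so that higher-frequency neighbors can be identified.

```python
def deletion_neighbors(word, lexicon):
    """Lexicon words formed by deleting one letter (e.g., juzgar -> jugar)."""
    candidates = {word[:i] + word[i + 1:] for i in range(len(word))}
    return candidates & lexicon

def addition_neighbors(word, lexicon):
    """Lexicon words formed by inserting one letter (e.g., conejo -> consejo).
    A word w is an addition neighbor of `word` exactly when `word` is a
    deletion neighbor of w, so we search the lexicon directly."""
    return {w for w in lexicon if len(w) == len(word) + 1
            and word in deletion_neighbors(w, {word})}

lexicon = {"jugar", "juzgar", "conejo", "consejo"}
print(deletion_neighbors("juzgar", lexicon))   # {'jugar'}
print(addition_neighbors("conejo", lexicon))   # {'consejo'}
```

Conventional neighborhood metrics such as Coltheart's N count only same-length substitution neighbors, which is why findings like these argue for redefining the metric.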

20.
Four signal-detection experiments demonstrated robust stimulus-driven, or exogenous, attentional processes in selective frequency listening. Detection of just-above-threshold signal tones was consistently better when the signal matched the frequency of an uninformative cue tone, even with relatively long cue-signal delays (Experiment 1) or when as few as 1 in 8 signals were at the cued frequency (Experiment 2). Experiments 3 and 4 compared performance with informative and uninformative cues. The involvement of intentional, or endogenous, processes was found to only slightly increase the size of the cuing effect beyond that evident with solely exogenous processes, although the attention band, a measure of how narrowly attention is focused, was found to be wider when cues were informative. The implications for models of auditory attention are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

