1.
When more than one retrieval cue is available, the cues may combine their effects independently or interactively and facilitate the retrieval process. The predictions of 2 versions of an independent model and 1 interactive model were tested in a situation that required the retrieval of item and category information. Ss were 72 undergraduates. In Exp I, the 2 sources of information were redundant in that the retrieval of either item or category information was sufficient to determine the correct response. In Exp II, item and category information were not redundant, and the evaluation of both types of information was necessary. Results favor the interactive model of retrieval. When presented with a test item, Ss attempted to retrieve both semantic category and item information, and partial information about either was used to aid retrieval of the other. (French abstract) (16 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
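To make the contrast between the two classes of model concrete, the sketch below compares an independent combination rule (probability summation over item and category cues) with an interactive rule in which partial evidence from one cue boosts the effectiveness of the other. The probabilities, the gain parameter, and the specific functional forms are illustrative assumptions, not the models actually fitted in these experiments.

```python
# Illustrative sketch of independent vs. interactive cue combination.
# All probabilities and the interaction rule are assumptions for
# exposition; they are not the models tested in the experiments.

def independent_retrieval(p_item, p_category):
    """Cues act separately: retrieval succeeds if either cue alone succeeds."""
    return 1 - (1 - p_item) * (1 - p_category)

def interactive_retrieval(p_item, p_category, gain=0.5):
    """Partial information about one cue boosts retrieval via the other."""
    boosted_item = min(1.0, p_item + gain * p_category * (1 - p_item))
    boosted_cat = min(1.0, p_category + gain * p_item * (1 - p_category))
    return 1 - (1 - boosted_item) * (1 - boosted_cat)

if __name__ == "__main__":
    p_item, p_category = 0.4, 0.5   # hypothetical single-cue retrieval probabilities
    print("independent:", round(independent_retrieval(p_item, p_category), 3))
    print("interactive:", round(interactive_retrieval(p_item, p_category), 3))
```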
2.
The reported research investigates how listeners recognize coarticulated phonemes. First, 2 data sets from experiments on the recognition of coarticulated phonemes published by D. H. Whalen (1989) are reanalyzed. The analyses indicate that listeners used categorization strategies involving a hierarchical dependency. Two new experiments are reported investigating the production and perception of fricative–vowel syllables. On the basis of measurements of acoustic cues on a large set of natural utterances, it was predicted that listeners would use categorization strategies involving a dependency of the fricative categorization on the perceived vowel. The predictions were tested in a perception experiment using a 2-dimensional synthetic fricative–vowel continuum. Model analyses of the results pooled across listeners confirmed the predictions. Individual analyses revealed some variability in the categorization dependencies used by different participants. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
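The hierarchical dependency described above can be pictured as a categorization scheme in which the fricative decision is conditioned on the vowel already perceived, so that the same frication cue can yield different fricative percepts in different vowel contexts. The cue values and category boundaries in the sketch below are made-up numbers for exposition, not the measured acoustic cues or the fitted model.

```python
# Toy illustration of a categorization hierarchy in which the fricative
# boundary shifts with the perceived vowel. Cue values and boundaries
# are invented numbers for exposition only.

def categorize_vowel(f2_hz):
    """Classify the vowel from a single spectral cue (assumed boundary)."""
    return "i" if f2_hz > 2000 else "u"

def categorize_fricative(friction_peak_hz, vowel):
    """Fricative decision depends on the vowel already perceived:
    a rounded-vowel context (here 'u') lowers the assumed /s/-/sh/ boundary."""
    boundary = 4000 if vowel == "i" else 3600
    return "s" if friction_peak_hz > boundary else "sh"

def categorize_syllable(friction_peak_hz, f2_hz):
    vowel = categorize_vowel(f2_hz)
    fricative = categorize_fricative(friction_peak_hz, vowel)
    return fricative + vowel

if __name__ == "__main__":
    # Same friction cue, different vowels -> possibly different fricative percepts.
    print(categorize_syllable(3800, f2_hz=2300))  # 'i' context
    print(categorize_syllable(3800, f2_hz=900))   # 'u' context
```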
3.
An interactive activation model of context effects in letter perception: I. An account of basic findings.
Describes a model in which perception results from excitatory and inhibitory interactions of detectors for visual features, letters, and words. A visual input excites detectors for visual features in the display and for letters consistent with the active features. Letter detectors in turn excite detectors for consistent words. It is suggested that active word detectors mutually inhibit one another and send feedback to the letter level, strengthening the activation, and hence the perceptibility, of their constituent letters. Computer simulation of the model exhibits the perceptual advantage for letters in words over unrelated contexts and is considered consistent with the basic facts about the word advantage. Most important, the model produces facilitation for letters in pronounceable pseudowords as well as words. Pseudowords activate detectors for words that are consistent with most of the active letters, and feedback from the activated words strengthens the activations of the letters in the pseudoword. The model thus accounts for apparently rule-governed performance without any actual rules. (50 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
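The mechanism described (bottom-up excitation from letters to words, competition within the word level, and word-to-letter feedback) can be sketched in a few lines. The four-word lexicon, the rate parameters, and the update rule below are deliberate simplifications for illustration; they are not the published parameter set or equations of the model.

```python
# Minimal sketch of interactive activation between a letter level and a
# word level: bottom-up excitation, within-level inhibition among words,
# and word-to-letter feedback. Lexicon and parameters are illustrative.

LEXICON = ["work", "word", "wear", "weak"]
EXCITE, INHIBIT, FEEDBACK, DECAY = 0.10, 0.04, 0.05, 0.07

def run(letter_input, steps=20):
    # letter_input: dict position -> dict letter -> bottom-up support (0..1)
    letters = {(pos, ch): 0.0 for pos in range(4) for ch in "abcdefghijklmnopqrstuvwxyz"}
    words = {w: 0.0 for w in LEXICON}
    for _ in range(steps):
        # Letter level: external input plus feedback from active words.
        for (pos, ch), act in letters.items():
            bottom_up = letter_input.get(pos, {}).get(ch, 0.0)
            top_down = sum(a for w, a in words.items() if w[pos] == ch)
            letters[(pos, ch)] = max(0.0, act + EXCITE * bottom_up
                                     + FEEDBACK * top_down - DECAY * act)
        # Word level: excitation from consistent letters, inhibition from rivals.
        for w in LEXICON:
            support = sum(letters[(pos, w[pos])] for pos in range(4))
            rivals = sum(a for other, a in words.items() if other != w)
            words[w] = max(0.0, words[w] + EXCITE * support
                           - INHIBIT * rivals - DECAY * words[w])
    return words

if __name__ == "__main__":
    # Ambiguous final letter: equal support for 'k' and 'd' after "wor".
    stim = {0: {"w": 1.0}, 1: {"o": 1.0}, 2: {"r": 1.0}, 3: {"k": 0.5, "d": 0.5}}
    print(run(stim))  # "work" and "word" rise together and feed their letters back
```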
4.
In speech perception, phonetic information can be acquired optically as well as acoustically. The motor theory of speech perception holds that motor control structures are involved in the processing of visible speech, whereas perceptual accounts do not make this assumption. Motor involvement in speech perception was examined by showing participants response-irrelevant movies of a mouth articulating /bʌ/ or /dʌ/ and asking them to verbally respond with either the same or a different syllable. The letters "Ba" and "Da" appeared on the speaker's mouth to indicate which response was to be performed. A reliable interference effect was observed. In subsequent experiments, perceptual interference was ruled out by using response-unrelated imperative stimuli and by preexposing the relevant stimulus information. Further, it was demonstrated that simple directional features (opening and closing) do not account for the effect. Rather, the present study provides evidence for the view that visible speech is processed up to a late, response-related processing stage, as predicted by the motor theory of speech perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
5.
How word production unfolds remains controversial. Serial models posit that phonological encoding begins only after lexical node selection, whereas cascade models hold that it can occur before selection. Both models were evaluated by testing whether unselected lexical nodes influence phonological encoding in the picture-picture interference paradigm. English speakers were shown pairs of superimposed pictures and were instructed to name one picture and ignore another. Naming was faster when target pictures were paired with phonologically related (bed-bell) than with unrelated (bed-pin) distractors. This suggests that the unspoken distractors exerted a phonological influence on production. This finding is inconsistent with serial models but in line with cascade ones. The facilitation effect was not replicated in Italian with the same pictures, supporting the view that the effect found in English was caused by the phonological properties of the stimuli. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
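The serial/cascade contrast at issue can be stated compactly: in a serial-discrete architecture only the selected lexical node forwards activation to phonological encoding, whereas in a cascading architecture every activated node does so in proportion to its activation, which is why an unselected distractor such as bell could prime the segments of bed. The activation values and scaling in the sketch below are arbitrary illustrative numbers, not either model as published.

```python
# Schematic contrast between serial-discrete and cascading lexical access.
# Activation values are arbitrary illustrative numbers.

lexical_activation = {"bed": 0.9, "bell": 0.3, "pin": 0.3}  # target plus possible distractors

def phonological_activation_serial(selected):
    """Serial model: only the selected node is phonologically encoded."""
    return {selected: 1.0}

def phonological_activation_cascade(activations):
    """Cascade model: every active node passes (scaled) activation to phonology,
    so an unselected distractor like 'bell' can prime the segments of 'bed'."""
    return {word: 0.5 * act for word, act in activations.items() if act > 0}

if __name__ == "__main__":
    print(phonological_activation_serial("bed"))
    print(phonological_activation_cascade(lexical_activation))
```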
6.
Jescheniak, Jörg D.; Hahne, Anja; Hoffmann, Stefanie; Wagner, Valentin. Journal of Experimental Psychology: Learning, Memory, and Cognition, 2006, 32(2), 373
There is a long-standing debate in the area of speech production on the question of whether only words selected for articulation are phonologically activated (as maintained by serial-discrete models) or whether this is also true for their semantic competitors (as maintained by forward-cascading and interactive models). Past research has addressed this issue by testing whether retrieval of a target word (e.g., cat) affects, or is affected by, the processing of a word that is phonologically related to a semantic category coordinate of the target (e.g., doll, related to dog) and has consistently failed to obtain such mediated effects in adult speakers. The authors present a series of experiments demonstrating that mediated effects are present in children (around age 7) and diminish with increasing age. This observation provides further evidence for cascaded models of lexical retrieval. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
7.
Previous work (Tuller, Case, Ding, & Kelso, 1994) has revealed signature properties of nonlinear dynamical systems in how people categorize speech sounds. The data were modeled by using a two-well potential function that deformed with stimulus properties and was sensitive to context. Here we evaluate one prediction of the model, namely that the rate of change of the potential's slope should increase when the category is repeatedly perceived. Judged goodness of category membership was used as an index of the slope of the potential. Stimuli from a "say"-"stay" continuum were presented with gap duration changing sequentially throughout the range from 0 to 76 to 0 msec, or from 76 to 0 to 76 msec. Subjects identified each token as either "say" or "stay" and rated how good an exemplar it was of the identified category. As predicted, the same physical stimulus presented at the end of a sequence was judged a better exemplar of the category than was the identical stimulus presented at the beginning of the sequence. In contrast, stimuli presented twice near the middle of a sequence with few (or no) stimuli between them, as well as stimuli presented with an intervening random set, showed no such differences. These results confirm the hypothesis of a context-sensitive dynamical representation underlying speech.
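The two-well potential referred to here is often written in this line of work as V(x) = k*x - x^2/2 + x^4/4, where x indexes the percept ("say" vs. "stay") and k is driven by the acoustic parameter and by context. The sketch below evaluates how the local slope of such a potential changes as k is shifted; the mapping from gap duration to k and the probe point are assumptions for illustration, not the fitted model.

```python
# Sketch of a two-well (double-well) potential of the form often used to
# model bistable speech categorization: V(x) = k*x - x**2/2 + x**4/4.
# The mapping from gap duration to k below is an assumption for illustration.

def V(x, k):
    return k * x - x**2 / 2 + x**4 / 4

def slope(x, k, h=1e-5):
    """Numerical derivative dV/dx; its steepness near a well is taken here
    as a stand-in for judged category goodness."""
    return (V(x + h, k) - V(x - h, k)) / (2 * h)

def k_from_gap(gap_ms, history_bias=0.0):
    """Hypothetical linear mapping from silent-gap duration (0-76 ms) to k,
    plus a context/history term that deepens the currently perceived well."""
    return -0.4 + 0.8 * (gap_ms / 76.0) + history_bias

if __name__ == "__main__":
    x_probe = 0.8  # probe point on the inner slope of the "stay" well (illustrative)
    for bias in (0.0, -0.1):  # without and with accumulated perceptual history
        k = k_from_gap(38, history_bias=bias)
        print(f"bias={bias:+.1f}  k={k:+.2f}  dV/dx at x={x_probe}: {slope(x_probe, k):+.3f}")
```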
8.
Vatikiotis-Bateson, E.; Eigsti, I. M.; Yano, S.; Munhall, K. G. Perception & Psychophysics, 1998, 60(6), 926-940
Perceiver eye movements were recorded during audiovisual presentations of extended monologues. Monologues were presented at different image sizes and with different levels of acoustic masking noise. Two clear targets of gaze fixation were identified, the eyes and the mouth. Regardless of image size, perceivers of both Japanese and English gazed more at the mouth as masking noise levels increased. However, even at the highest noise levels and largest image sizes, subjects gazed at the mouth only about half the time. For the eye target, perceivers typically gazed at one eye more than the other, and the tendency became stronger at higher noise levels. English perceivers displayed more variety of gaze-sequence patterns (e.g., left eye to mouth to left eye to right eye) and persisted in using them at higher noise levels than did Japanese perceivers. No segment-level correlations were found between perceiver eye motions and phoneme identity of the stimuli.
9.
Connectionist models of perception and cognition, including the process of deducing meaningful messages from patterns of acoustic waves emitted by vocal tracts, are developed and refined as human understanding of brain function, psychological processes, and the properties of massively parallel architectures advances. The present article presents several important contributions from diverse points of view in the area of connectionist modeling of speech perception and discusses their relative merits with respect to specific theoretical issues and empirical findings. TRACE, the Elman/Norris net, and Adaptive Resonance Theory constitute pivotal points exemplifying overall modeling success, progress in temporal representation, and plausible modeling of learning, respectively. Other modeling efforts are presented for the specific insights they offer, and the article concludes with a discussion of computational versus dynamic modeling of phonological processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
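Because the models are contrasted here partly by how they represent time (TRACE duplicating detectors over time slices, the Elman/Norris approach using recurrence), a minimal simple-recurrent-network forward pass may help fix ideas. The layer sizes, random weights, and input coding below are assumptions for illustration and do not reproduce any of the published models.

```python
# Minimal Elman-style simple recurrent network forward pass: the hidden
# state from the previous time step is fed back as context, giving the
# network an implicit representation of time. Sizes and random weights
# are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 5, 8, 3          # e.g., acoustic features -> phoneme outputs
W_xh = rng.normal(0, 0.5, (n_hidden, n_in))
W_hh = rng.normal(0, 0.5, (n_hidden, n_hidden))   # recurrent (context) weights
W_hy = rng.normal(0, 0.5, (n_out, n_hidden))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def run_sequence(inputs):
    h = np.zeros(n_hidden)               # context starts empty
    outputs = []
    for x in inputs:                     # one acoustic frame per step
        h = np.tanh(W_xh @ x + W_hh @ h) # current input plus previous state
        outputs.append(softmax(W_hy @ h))
    return outputs

if __name__ == "__main__":
    frames = [rng.random(n_in) for _ in range(4)]   # a fake 4-frame "utterance"
    for t, y in enumerate(run_sequence(frames)):
        print(f"t={t}: output distribution {np.round(y, 2)}")
```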
10.
In 5 picture-word interference experiments the activation of word class information was investigated. The first experiment, in which subjects used bare nouns to describe the pictures, failed to reveal any interference effect of noun distractor words as opposed to closed-class distractor words. In the next 4 experiments, the pictures were named with a definite determiner plus noun, completing a sentence fragment. The data demonstrate that noun distractors interfere more strongly with picture naming than do non-noun distractors. This held for both visual and auditory presentation of the distractor words. The interference effect showed up in a time window where semantic interference can usually be observed, supporting the assumption that at an early stage of lexical access semantic and syntactic activation processes overlap. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
11.
Burton, A. Mike; Bindemann, Markus; Langton, Stephen R. H.; Schweinberger, Stefan R.; Jenkins, Rob. Journal of Experimental Psychology: Human Perception and Performance, 2009, 35(1), 108
The direction of another person's gaze is difficult to ignore when presented at the center of attention. In 6 experiments, perception of unattended gaze was investigated. Participants made directional (left-right) judgments to gazing-face or pointing-hand targets, which were accompanied by a distractor face or hand. Processing of the distractor was assessed via congruency effects on target response times. Congruency effects were found from the direction of distractor hands but not from the direction of distractor gazes (Experiment 1). This pattern persisted even when distractor sizes were increased to compensate for their peripheral presentation (Experiments 2 and 5). In contrast, congruency effects were exerted by profile heads (Experiments 3 and 4). In Experiment 6, isolated eye region distractors produced no congruency effects, even when they were presented near the target. These results suggest that, unlike other facial information, gaze direction cannot be perceived outside the focus of attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
12.
Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by habituating infants to a detectably asynchronous syllable (666 ms; Experiment 2) and then testing for detection of decreasing degrees of asynchrony (500, 366, and 0 ms). Following habituation to the synchronous syllable, infants detected only the largest A-V asynchrony (0 ms vs. 666 ms), whereas following habituation to the asynchronous syllable, infants detected the largest asynchrony (666 ms vs. 0 ms) as well as a smaller one (666 ms vs. 366 ms). Experiment 3 investigated the underlying mechanism of A-V asynchrony detection and indicated that responsiveness was based on a sensitivity to stimulus-energy onsets rather than the dynamic correlation between acoustic and visible utterance attributes. These findings demonstrated that infant perception of A-V speech synchrony is subject to the effects of short-term experience and that it is driven by a low-level, domain-general mechanism. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
13.
Two experiments are reported examining how value and relatedness interact to influence metacognitive monitoring and control processes. Participants studied unrelated and related word pairs, each accompanied by point values denoting how important the items were to remember. These values were presented either before or after each pair in a between-subjects design, and participants made item-by-item judgments of learning (JOLs) predicting the likelihood that each item would be remembered later. Results from Experiment 1 showed that participants used value and relatedness as cues to inform their JOLs. Interestingly, JOLs increased as a function of value even in the after condition in which value had no impact on cued recall. Participants in Experiment 2 were permitted to control study time for each item. Results showed that value and relatedness were simultaneously considered when allocating study time. These results support a cue-weighting process in which JOLs and study time allocation are based on multiple cues, which may or may not be predictive of future memory performance, and complement the agenda-based regulation model of study time (Ariel, Dunlosky, & Bailey, 2009) by providing evidence for agenda-based monitoring. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
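The cue-weighting account suggested here can be expressed as a weighted combination of cues that feeds both the JOL and the study-time decision, whether or not each cue actually predicts later recall. The weights, scales, and functional form in the sketch below are illustrative assumptions, not parameters estimated from these experiments.

```python
# Illustrative cue-weighting sketch: a judgment of learning (JOL) and a
# study-time allocation are both computed as weighted combinations of
# item value and pair relatedness. Weights are arbitrary assumptions.

def predicted_jol(value_points, related, w_value=0.05, w_related=0.30, base=0.40):
    """JOL on a 0-1 scale; note that value raises the JOL even if (as in the
    'after' condition) it has no effect on actual recall."""
    return min(1.0, base + w_value * value_points + w_related * (1.0 if related else 0.0))

def study_time_seconds(value_points, related, base=2.0):
    """More time for high-value items; less time needed for related pairs."""
    return max(0.5, base + 0.3 * value_points - 1.0 * (1.0 if related else 0.0))

if __name__ == "__main__":
    for value in (1, 5, 10):
        for related in (False, True):
            print(value, related,
                  round(predicted_jol(value, related), 2),
                  round(study_time_seconds(value, related), 2))
```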
14.
This article reports three studies designed to increase our understanding of developmental changes in cross-language speech perception. In the first study, we compared adult speakers of English and Hindi on their ability to discriminate pairings from a synthetic voiced, unaspirated place-of-articulation continuum. Results indicated that English listeners discriminate two categories (ba vs. da), whereas Hindi listeners discriminate three (ba vs. da, and da vs. DA). We then used stimuli from within this continuum in the next two experiments to determine (a) if our previously reported finding (Werker & Tees, 1984a) of a reorganization between 6 and 12 months of life from "universal" to "language-specific" phonetic perception would be evident using synthetic (rather than natural) stimuli in which the physical variability within and between categories could be controlled, and (b) whether the younger infants' sensitivity to nonnative speech contrasts is best explained by reference to the phonetic relevance or the physical similarity of the stimuli. In addition to replicating the developmental reorganization, the results indicate that infant speech perception is phonetically relevant. We discuss the implications of these results. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
15.
Transference is a key concept in psychoanalysis, distinguishing the analytic treatment from other forms of psychotherapy. In this essay, the authors place transference into the context of a general psychology of human functioning and link it to the neurobiology of perception. The authors briefly review the literature within and outside of psychoanalysis, define transference through the lens of perception, and propose that it is ubiquitous in humans. When not impaired, transference is an adaptive ego function that emerges, along with countertransference, in the context of any interpersonal situation of significant emotional import. The authors draw on W. Freeman's (2003, 2004) research on olfaction, which has since been replicated in other sensory modalities, for a neurodynamic basis for their model of perception and describe how transference may be thought of as an evolved form of it. The authors' view is that transference is a hierarchically integrated perceptual modality of a higher order, although it depends on the same neurodynamic processes as those found in each sensory modality. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
16.
Used the interaural transfer technique in 2 experiments with 19 Ss (aged 18–45 yrs) and 9 Ss (aged 19–34 yrs), respectively, to determine the relative locus of selective adaptation in speech perception. Findings show that voiced (/ba/ or /da/) and voiceless (/pa/ or /ta/) consonants seemed to affect different auditory system loci. On a voice-onset-time continuum (/ba/ to /pa/ or /da/ to /ta/) the selective adaptation effects produced by voiceless consonants were largely ear independent and endured over delays of at least 1 min. However, voiced adapters produced selective adaptation effects that were highly ear specific and relatively short lived (…)
17.
Cross-modal priming experiments have shown that surface variations in speech are perceptually tolerated as long as they occur in phonologically viable contexts. For example, [freɪp] (frayp) gains access to the mental representation of freight when in the context of [freɪpbeərə] (frayp bearer) because the change occurs in normal speech as a process of place assimilation. The locus of these effects in the perceptual system was examined. Sentences containing surface changes were created that either agreed with or violated assimilation rules. The lexical status of the assimilated word also was manipulated, contrasting lexical and nonlexical accounts. Two phoneme monitoring experiments showed strong effects of phonological viability for words, with weaker effects for nonwords. It is argued that the listener's percept of the form of speech is a product of a phonological inference process that recovers the underlying form of speech. This process can operate on both words and nonwords, although it interacts with the retrieval of lexical information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
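The notion of a phonologically viable context can be made concrete with a small rule check: an underlying word-final coronal (the /t/ of freight) may legitimately surface as a labial ([p]) only when the next word begins with a labial, as in frayp bearer. The feature table and inference routine below are a simplified sketch of English place assimilation for illustration, not the authors' model.

```python
# Simplified sketch of regressive place assimilation in English and of the
# inference step a listener might use to recover an underlying form.
# The segment inventory and rule coverage are deliberately minimal.

PLACE = {"t": "coronal", "d": "coronal", "n": "coronal",
         "p": "labial", "b": "labial", "m": "labial",
         "k": "velar", "g": "velar"}

def assimilation_viable(surface_final, underlying_final, next_initial):
    """A coronal may surface with the place of the following consonant;
    any other mismatch between surface and underlying form is unviable."""
    if surface_final == underlying_final:
        return True
    return (PLACE.get(underlying_final) == "coronal"
            and PLACE.get(surface_final) == PLACE.get(next_initial))

if __name__ == "__main__":
    # 'frayp' heard before 'bearer': /t/ -> [p] before a labial is viable,
    # so the percept can map onto the lexical entry 'freight'.
    print(assimilation_viable("p", "t", "b"))   # True  (viable context)
    # 'frayp' before a coronal-initial word: the change is not licensed.
    print(assimilation_viable("p", "t", "t"))   # False (unviable context)
```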
18.
The literature on the role of infant–adult comparisons in developmental accounts of speech perception is reviewed, and methodological problems associated with such comparisons are delineated. It is argued that the data that are appropriate for the evaluation of categorical perception in infancy are unavailable. Moreover, the view that language experience operates to eliminate discriminative abilities once present rather than to add abilities once absent is without clear-cut support. The serious confounding of age and method of testing casts doubt on current developmental accounts of speech perception based on comparisons of infant and adult. (French abstract) (87 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
19.
Isolated kinematic properties of visible speech can provide information for lip reading. Kinematic facial information is isolated by darkening an actor's face and attaching dots to various articulators so that only moving dots can be seen with no facial features present. To test the salience of these images, the authors conducted experiments to determine whether the images could visually influence the perception of discrepant auditory syllables. Results showed that these images can influence auditory speech independently of the participants' knowledge of the stimuli. In other experiments, single frozen frames of visible syllables were presented with discrepant auditory syllables to test the salience of static facial features. Although the influence of the kinematic stimuli was perceptual, any influence of the static featural stimuli was likely based on participants' misunderstanding or postperceptual response bias. (PsycINFO Database Record (c) 2010 APA, all rights reserved)