Similar Articles
20 similar articles found (search time: 15 ms)
1.
What is the role of long-term memories of previous stimulus-response mappings, and of previous sensory and perceptual experiences generally, in psychophysical scaling judgments? In each of 4 experiments, subjects made judgments of the loudness of sounds on 3 successive days. Stimulus intensities were drawn randomly from the same set on Days 1 and 3 but from a different set on Day 2. Four different types of psychophysical scaling judgments were studied: the first two methods required completely relative judgment, the last two completely absolute judgment. Data from all methods reveal profound effects, on responses to current stimuli, of the stimulus-response mappings experienced on previous days and of the immediately preceding stimuli and responses. Responses were typically a compromise between absolute and relative judgment. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Examined the effects of varying detail on memory. In Exp I, pictorial embellishment was varied by presenting 27 Ss aged 60+ yrs and 30 undergraduates with normal photographs, high-contrast photographs, or line drawings, and testing their memory immediately and 4 wks later. All of the Ss did best with the most elaborate pictures (normal photographs), and old Ss remembered as well as young at the immediate but not at the delayed interval. In Exp II, with 21 old Ss and 21 18–36 yr olds, detail was varied by adding background to line drawings of a central object. Ss of both ages profited from enhanced background detail, and there were no differences in memory as a function of age. Exp III replicated Exp II, except that Ss (10 elderly and 17 college students) studied the pictures under divided attention conditions. Again, Ss of both ages recognized elaborate pictures best, and no significant age differences emerged. Results suggest that old and young adults profit from visual embellishment and that memory for meaningful pictures remains relatively intact with age. (14 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about five percentage points better accuracy for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naive (untrained) listeners showed that this incongruency advantage (IA) is level-dependent: there is no advantage for incongruent sounds lower than a Sound/Scene ratio (So/Sc) of −7.5 dB, but there is about five percentage points better accuracy for sounds with greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to a specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features, nor semantic assessments of sound-scene congruency can account for this difference, indicating the IA is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events, under particular listening conditions. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
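The Sound/Scene ratio reported in this abstract is a standard level ratio in decibels. A minimal sketch of how such a ratio is computed from RMS amplitudes (the function name and the RMS convention are illustrative assumptions, not taken from the study):

```python
import math

def sound_scene_ratio_db(rms_sound, rms_scene):
    """Level of the target sound relative to the background scene, in dB.

    Positive values mean the sound is more intense than the scene.
    The -7.5 dB cutoff reported above corresponds to the sound's RMS
    amplitude being about 0.42 times that of the scene.
    """
    return 20.0 * math.log10(rms_sound / rms_scene)
```

For example, a sound with twice the scene's RMS amplitude sits at about +6 dB So/Sc.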

4.
Observers briefly viewed random dots moving in a given direction and subsequently recalled that direction. When required to remember a single direction, observers performed accurately for memory intervals of up to 8 s; this high-fidelity memory for motion was maintained when observers executed a vigilance task during the memory interval. When observers tried to remember multiple directions of motion, performance deteriorated with increasing number of directions. Still, memory for multiple directions was unchanged over delays of up to 30 s. In a forced-choice experiment, observers viewed 2 successive animation sequences separated by a memory interval; for both sequences, dots moved in any direction within a limited bandwidth. Observers accurately judged which animation sequence was more coherent, even with memory intervals of 30 s. The findings are considered within the context of cognitive bias and memory for other aspects of perception.

5.
The number of individual items that can be maintained in working memory is limited. One solution to this problem is to store representations of ensembles that contain summary information about large numbers of items (e.g., the approximate number or cumulative area of a group of many items). Here we explored the developmental origins of ensemble representations by asking whether infants represent ensembles and, if so, how many at one time. We habituated 9-month-old infants to arrays containing 2, 3, or 4 spatially intermixed colored subsets of dots, then asked whether they detected a numerical change to one of the subsets or to the superset of all dots. Experiment Series 1 showed that infants detected a numerical change to 1 of the subsets when the array contained 2 subsets but not 3 or 4 subsets. Experiment Series 2 showed that infants detected a change to the superset of all dots no matter how many subsets were presented. Experiment 3 showed that infants represented both the approximate number and the cumulative surface area of these ensembles. Our results suggest that infants, like adults (Halberda, Sires, & Feigenson, 2006), can store quantitative information about 2 subsets plus the superset: a total of 3 ensembles. This converges with the known limit on the number of individual objects infants and adults can store and suggests that, throughout development, an ensemble functions much like an individual object for working memory. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

6.
Examined the relationship between children's cognitive processing of video and audio information on TV. 96 5-yr-olds viewed a videotaped segment of Sesame Street followed by comprehension and recognition tests. Ss viewed experimental segments in which (a) the audio and video tracks were from the same segment (A/V match), (b) the audio and video tracks were not from the same segment (A/V mismatch), (c) the video track was presented alone, or (d) the audio track was presented alone. This design allowed unconfounded comparisons of modality-specific processing. In the A/V mismatch condition, memory for audio information was reduced more than memory for video information. However, comprehension and recognition of audio information was similar in the audio-only and A/V match conditions. Results suggest that in regular TV programs, the video information does not interfere with processing the audio information but is more salient and memorable than the audio material. (13 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
In 4 experiments, the authors examined sex differences in audiospatial perception of sounds that moved toward and away from the listener. Experiment 1 showed that both men and women underestimated the time-to-arrival of full-cue looming sounds. However, this perceptual bias was significantly stronger among women than among men. In Experiment 2, listeners estimated the terminal distance of sounds that approached but stopped before reaching them. Women perceived the looming sounds as closer than did men. However, in Experiment 3, with greater statistical power, the authors found no sex difference in the perceived distance of sounds that traveled away from the listener, demonstrating a sex-based specificity for auditory looming perception. Experiment 4 confirmed these results using equidistant looming and receding sounds. The findings suggest that sex differences in auditory looming perception are not due to general differences in audiospatial ability, but rather illustrate the environmental salience and evolutionary importance of perceiving looming objects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
In Exp. I with 3 male and 1 female Os, the bisensory recognition of simultaneously presented auditory and visual verbal information was measured as a function of auditory and visual recognition. It is shown that bisensory performance is superior to performance predicted by a model that assumes the 2 modalities are processing the information independently. Instead integrative processing is suggested. In previous studies using the theory of signal detectability (TSD), independent processing of bisensory presentations of mathematically equivalent stimuli has been shown. Present results suggest that the mathematically equated stimuli in those studies were not cognitively equivalent. Exp. II with 3 male Os lends support to this notion by (a) using verbal information in a TSD paradigm, and (b) showing that when the stimuli are equivalent the results are consistent with a model in which bisensory processing occurs integratively on a common decision axis. However, when the stimuli to each mode are not equivalent, results are consistent with a model in which bisensory processing occurs independently on separate decision axes. (French summary) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Short-term memory for the timing of irregular sequences of signals has been said to be more accurate when the signals are auditory than when they are visual. No support for this contention was obtained when the signals were beeps vs flashes (Exps 1 and 3) nor when they were sets of spoken vs typewritten digits (Exps 4 and 5). On the other hand, support was obtained both for beeps vs flashes (Exps 2 and 5) and for repetitions of a single spoken digit vs repetitions of a single typewritten digit (Exp 6) when the Ss silently mouthed a nominally irrelevant item during sequence presentation. Also, the timing of sequences of auditory signals, whether verbal (Exp 7) or nonverbal (Exps 8 and 9), was more accurately remembered when the signals within each sequence were identical. The findings are considered from a functional perspective. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
We investigated the effects of semantic priming on initial encoding of briefly presented pictures of objects and scenes. Pictures in 4 experiments were presented for varying durations and were followed immediately by a mask. In Exps 1 and 2, pictures of simple objects were either preceded or not preceded by the object's category name (e.g., dog). In Exp 1 we measured immediate object identification; in Exp 2 we measured delayed old/new recognition in which targets and distractors were from the same categories. In Exp 3 naturalistic scenes were either preceded or not preceded by the scene's category name (e.g., supermarket). We measured delayed recognition in which targets and distractors were described by the same category names. In Exps 1–3, performance was better for primed than for unprimed pictures. Exp 4 was similar to Exp 2 in that we measured delayed recognition for simple objects. As in Exps 1–3, a prime that preceded the object improved subsequent memory performance for the object. However, a prime that followed the object did not affect subsequent performance. Together, these results imply that priming leads to more efficient information acquisition. We offer a picture-processing model that accounts for these results. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
The disruption of short-term memory by to-be-ignored auditory sequences (the changing-state effect) has often been characterized as attentional capture by deviant events (deviation effect). However, the present study demonstrates that changing-state and deviation effects are functionally distinct forms of auditory distraction: The disruption of visual-verbal serial recall by changing-state speech was independent of the effect of a single deviant voice embedded within the speech (Experiment 1); a voice-deviation effect, but not a changing-state effect, was found on a missing-item task (Experiment 2); and a deviant voice repetition within the context of an alternating-voice irrelevant speech sequence disrupted serial recall (Experiment 3). The authors conclude that the changing-state effect is the result of a conflict between 2 seriation processes being applied concurrently to relevant and irrelevant material, whereas the deviation effect reflects a more general attention-capture process. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Describes 2 experiments in which visual and/or auditory location precues preceded visual or auditory targets. 30 observers were required to judge the location of the targets. Conditions were such that involuntary, stimulus-driven (SD) attention shifts were the only ones likely to occur and give rise to cueing effects. It was found that visual precues affected response time to localize both visual targets and auditory targets, but auditory precues affected only the time to localize auditory targets. Moreover, when visual and auditory cues conflicted, visual cues dominated in the visual task but were dominated by auditory cues in the auditory task. Results suggest that involuntary SD attention shifts might be controlled by a modality-specific mechanism for visual tasks, whereas SD shifts of auditory attention are controlled by a supramodal mechanism. (French abstract) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location or identity of the visual object. The experiments also show that the effect is not due to general alerting (because it does not occur with visual cues), nor is it due to top-down cuing of the visual change (because it still occurs when the pip is synchronized with distractors on the majority of trials). Instead, we propose that the temporal information of the auditory signal is integrated with the visual signal, generating a relatively salient emergent feature that automatically draws attention. Phenomenally, the synchronous pip makes the visual object pop out from its complex environment, providing a direct demonstration of spatially nonspecific sounds affecting competition in spatial visual processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Viewers can easily spot a target picture in a rapid serial visual presentation (RSVP), but can they do so if more than 1 picture is presented simultaneously? Up to 4 pictures were presented on each RSVP frame, for 240 to 720 ms/frame. In a detection task, the target was verbally specified before each trial (e.g., man with violin); in a memory task, recognition was tested after each sequence. Target detection was much better than recognition memory, but in both tasks the more pictures on the frame, the lower the performance. When the presentation duration was set at 160 ms with a variable interframe interval such that the total times were the same as in the initial experiments, the results were similar. The results suggest that visual processing occurs in 2 stages: fast, global processing of all pictures in Stage 1 (usually sufficient for detection) and slower, serial processing in Stage 2 (usually necessary for subsequent memory). (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
[Correction Notice: An erratum for this article was reported in Vol 38(1) of Canadian Journal of Psychology Revue Canadienne de Psychologie (see record 2007-03769-001). A programming error occurred in preparing Figure 6. The correct figure looks quite similar to Figure 7, except that the upper eight components are considerably smaller in Figure 6 than in Figure 7. Values quoted in the text remain unchanged.] Processing in the peripheral auditory system of the human ear profoundly alters the characteristics of all acoustic signals impinging on the ear. Some of the 1st-order properties of this peripheral processing are now reasonably well understood: the system behaves as a heavily overlapped set of filters, with increasingly broader bandwidths at high frequencies, which results in good spectral resolution at low frequencies and good temporal resolution at high frequencies. Results of an examination of speech and music by this system are discussed. An attempt is then made to synthesize several papers on auditory and visual psychophysics, and to speculate on auditory-signal processing analogous to visual-color processing. Several simplified auditory representations of speech are proposed. (French abstract) (25 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Conducted 2 experiments in which changes in the critical flicker fusion (CFF) were determined during and after auditory deprivation (silence). In Exp I, 36 male undergraduates were exposed to 1 wk of auditory deprivation, measurements of the CFF being taken at daily intervals and at Days 1, 2, 3, and 7 after the termination of the experimental condition. Results show that the experimental group exhibited a significant progressive improvement in visual resolving power as a function of auditory deprivation and, following its termination, a gradual decline towards the pre-experimental baseline. On the other hand, 2 control conditions, a group of confined Ss, and a non-confined group showed no systematic changes in the CFF. In Exp II with 6 new Ss, the period of auditory deprivation was extended to 14 days. Results again reveal a progressive improvement on the CFF during the 1st wk of silence followed by an asymptotic performance during the 2nd wk. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Conducted 3 divided-attention experiments with 8 adult Silver King pigeons, in which matching to the visual or auditory component of a tone–light compound was compared with matching to visual or auditory elements as sample stimuli, to investigate Ss' short-term memory for simultaneously presented visual and auditory signals. In 0-sec delayed and simultaneous matching procedures, Ss were able to match visual signals equally well when presented alone or with a tone. Tones were matched at a substantially lower level of accuracy when presented with light signals than when presented as elements. The interfering effect of a signal light on tone matching was not related to the signaling value of the light, and the prior presentation of light proactively interfered with auditory delayed matching. Findings indicate a divided-attention process in which auditory processing is strongly inhibited in the presence of visual signals. (32 ref) (PsycINFO Database Record (c) 2011 APA, all rights reserved)

18.
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's “woofing”) and spoken words (e.g., /dɒg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
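The sensitivity and response criterion estimated in this study are the standard signal-detection measures d′ and c, computed from hit and false-alarm rates. A minimal sketch of the textbook formulas (function names are illustrative; the study's own analysis details are not reproduced here):

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def d_prime(hit_rate, fa_rate):
    """Sensitivity d': separation of signal and noise distributions in z units."""
    return _z(hit_rate) - _z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response criterion c: positive values indicate a conservative
    ('no') bias, negative values a liberal ('yes') bias."""
    return -0.5 * (_z(hit_rate) + _z(fa_rate))
```

For instance, a hit rate of .80 with a false-alarm rate of .20 gives d′ ≈ 1.68 and an unbiased criterion of c = 0.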

19.
Thresholds for the discrimination of temporal order were determined for selected auditory and visual stimulus dimensions in 10 normal-adult volunteers. Auditory stimuli consisted of binary pure tones varying in frequency or sound pressure level, and visual stimuli consisted of binary geometric forms varying in size, orientation, or color. We determined the effect of psychophysical method and the reliability of performance across stimulus dimensions. Using a single-track adaptive procedure, Experiment 1 showed that temporal-order thresholds (TOTs) varied with stimulus dimension, being lowest for auditory frequency, intermediate for size, orientation, and auditory level, and longest for color. Test performance improved over sessions and the profile of thresholds across stimulus dimensions had a modest reliability. Experiment 2 used a double-interleaved adaptive procedure and TOTs were similarly ordered as in Experiment 1. However, TOTs were significantly lower for initially ascending versus descending tracks. With this method, the reliability of the profile across stimulus dimensions and tracks was relatively low. In Experiment 3, psychometric functions were obtained for each of the stimulus dimensions and thresholds were defined as the interpolated 70.7% correct point. The relative ordering of TOTs was similar to those obtained in the first two experiments. Non-monotonicities were found in some of the psychometric functions, with the most prominent being for the color dimension. A cross-experiment comparison of results demonstrates that TOTs and their reliability are significantly influenced by the psychophysical method. Taken together, these results support the notion that the temporal resolution of ordered stimuli involves perceptual mechanisms specific to a given sensory modality or submodality.
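The 70.7%-correct point mentioned above is the convergence point of the classic two-down/one-up adaptive track (Levitt's transformed up-down rule). A minimal sketch of such a track; the class and its parameters are illustrative assumptions, not the procedure actually used in the study:

```python
class Staircase:
    """Minimal two-down/one-up adaptive track.

    Two consecutive correct responses lower the stimulus level by one
    step; any incorrect response raises it. This rule converges on the
    70.7%-correct point of the psychometric function.
    """

    def __init__(self, start_level, step):
        self.level = start_level
        self.step = step
        self.correct_streak = 0
        self.reversals = []          # levels at which direction flipped
        self._last_direction = 0     # +1 = up, -1 = down, 0 = none yet

    def update(self, correct):
        if correct:
            self.correct_streak += 1
            if self.correct_streak == 2:   # two-down rule
                self.correct_streak = 0
                self._move(-1)
        else:                              # one-up rule
            self.correct_streak = 0
            self._move(+1)
        return self.level

    def _move(self, direction):
        if self._last_direction and direction != self._last_direction:
            self.reversals.append(self.level)
        self._last_direction = direction
        self.level += direction * self.step


def threshold(track, n_last=4):
    """Estimate threshold as the mean of the last n reversal levels."""
    tail = track.reversals[-n_last:]
    return sum(tail) / max(len(tail), 1)
```

Starting at level 100 with a step of 10, the response sequence correct, correct, correct, correct, incorrect drives the track down to 80 and back up to 90, recording a reversal at 80.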

20.
Three experiments in Serbo-Croatian were conducted on the effects of phonological ambiguity and lexical ambiguity on printed word recognition. Subjects decided rapidly if a printed and a spoken word matched or not. Printed words were either phonologically ambiguous (two possible pronunciations) or unambiguous. If phonologically ambiguous, either both pronunciations were real words or only one was, the other being a nonword. Spoken words were necessarily unambiguous. Half the spoken words were auditorily degraded. In addition, the relative onsets of speech and print were varied. Speed of matching print to speech was slowed by phonological ambiguity, and the effect was amplified when the stimulus was also lexically ambiguous. Auditory degradation did not interact with print ambiguity, suggesting the perception of the spoken word was independent of the printed word. (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号