Similar Literature
20 similar documents found
1.
Research has shown the existence of perceptual and neural bias toward sounds perceived as sources approaching versus receding from a listener. It has been suggested that a greater biological salience of approaching auditory sources may account for these effects. In addition, these effects may hold only for those sources critical for our survival. In the present study, we provide support for these hypotheses by quantifying the emotional responses to different sounds with changing intensity patterns. In 2 experiments, participants were exposed to artificial and natural sounds simulating approaching or receding sources. The auditory-induced emotional effect was reflected in the performance of participants in an emotion-related behavioral task, their self-reported emotional experience, and their physiology (electrodermal activity and facial electromyography). The results of this study suggest that approaching unpleasant sound sources evoke more intense emotional responses in listeners than receding ones, whereas such an effect of perceived sound motion does not exist for pleasant or neutral sound sources. The emotional significance attributed to the sound source itself, the loudness of the sound, and loudness change duration seem to be relevant factors in this disparity. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献
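
The approach/recede manipulation described above relies on intensity rising versus falling over time. A minimal sketch of how such a stimulus could be generated, assuming a 1-kHz carrier, a 2-s duration, and a 20-dB level change (none of these values are given in the abstract):

```python
import numpy as np

def looming_tone(direction="approach", carrier_hz=1000.0, dur_s=2.0,
                 level_change_db=20.0, fs=44100):
    """Tone whose level rises (approaching source) or falls (receding source)."""
    t = np.arange(int(dur_s * fs)) / fs
    ramp_db = np.linspace(-level_change_db, 0.0, t.size)   # rising level in dB
    if direction == "recede":
        ramp_db = ramp_db[::-1]                            # falling level in dB
    envelope = 10.0 ** (ramp_db / 20.0)                    # dB -> linear amplitude
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

approaching = looming_tone("approach")
receding = looming_tone("recede")
```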

2.
According to the “sensory-motor model of semantic knowledge,” different categories of knowledge differ in the weight that different “sources of knowledge” have in their representation. Our study aimed to evaluate this model, checking whether subjective evaluations given by normal subjects confirm the different weight that various sources of knowledge have in the representation of different biological and artifact categories and of unique entities, such as famous people or monuments. Results showed that visual properties are considered the main source of knowledge for all the living and nonliving categories (as well as for unique entities), but that the clustering of these “sources of knowledge” is different for biological and artifact categories. Visual data are, indeed, mainly associated with other perceptual (auditory, olfactory, gustatory, and tactual) attributes in the mental representation of living beings and unique entities, whereas they are associated with action-related properties and tactile information in the case of artifacts. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献

3.
Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a mistuned element (i.e., harmonic) that “popped out” as a separate individuated auditory object and yielded the perception of concurrent sound objects. On each trial, participants indicated whether the incoming complex sound contained a brief gap or not. The gap (i.e., signal) was always inserted in the middle of one of the tonal elements. Our findings were consistent with an object-based account in which perception of two simultaneous auditory objects interfered with signal detection. This effect was observed for a wide range of gap durations and was greater when the mistuned harmonic was perceived as a separate object. These results suggest that attention may be initially shared among concurrent sound objects thereby reducing listeners' ability to process acoustic details belonging to a particular sound object. These findings provide new theoretical insight for our understanding of auditory attention and auditory scene analysis. (PsycINFO Database Record (c) 2011 APA, all rights reserved)  相似文献   
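
A rough illustration of the kind of stimulus this paradigm uses: a harmonic complex in which one component is mistuned (so it pops out as a separate object) and a brief silent gap is inserted into the middle of one component. All parameter values below (200-Hz fundamental, 10 harmonics, 8% mistuning, 40-ms gap) are assumptions for the sketch, not values from the study:

```python
import numpy as np

def complex_with_gap(f0=200.0, n_harm=10, mistuned_harm=3, mistune_pct=8.0,
                     gap_ms=40.0, gap_component=5, dur_s=1.0, fs=44100):
    """Harmonic complex with one mistuned component and a gap in one component."""
    t = np.arange(int(dur_s * fs)) / fs
    signal = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        f = k * f0
        if k == mistuned_harm:
            f *= 1.0 + mistune_pct / 100.0               # shift one harmonic upward
        component = np.sin(2 * np.pi * f * t)
        if k == gap_component:                            # silence a brief mid-tone gap
            mid = t.size // 2
            half_gap = int(gap_ms / 1000.0 * fs / 2)
            component[mid - half_gap:mid + half_gap] = 0.0
        signal += component
    return signal / n_harm

stimulus = complex_with_gap(mistune_pct=8.0)
```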

4.
Sociality may determine the subjective experience and physiological response to emotional stimuli. Film segments induced socially and nonsocially generated emotions. Comedy (social positive), bereavement (social negative), pizza scenes (nonsocial positive), and wounded bodies (nonsocial negative) elicited four distinct emotional patterns. Per subjective report, joy, sadness, appetite, and disgust were elicited by the targeted stimulus condition. The social/nonsocial dimension influenced which emotional valence(s) elicited a skin conductance response, a finding that could not be explained by differences in subjective arousal. Heart rate deceleration was more responsive to nonsocially generated emotions. Taken together, these findings suggest that sociality affects the physiological profile of responses to emotional valence. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

5.
The dorsal cochlear nucleus (DCN) is one of three nuclei at the terminal zone of the auditory nerve. Axons of its projection neurons course via the dorsal acoustic stria (DAS) to the inferior colliculus (IC), where their signals are integrated with inputs from various other sources. The DCN presumably conveys sensitivity to spectral features, and it has been hypothesized that it plays a role in sound localization based on pinna cues. To account for its remarkable spectral properties, a DCN circuit scheme was developed in which three inputs converge onto projection neurons: auditory nerve fibers, inhibitory interneurons, and wide-band inhibitors, which possibly consist of Onset-chopper (Oc) cells. We studied temporal and binaural properties in DCN and DAS and examined whether the temporal properties are consistent with the model circuit. Interneurons (type II) and projection (types III and IV) neurons differed from Oc cells by their longer latencies and temporally nonlinear responses to amplitude-modulated tones. They also showed evidence of early inhibition to clicks. All projection neurons examined were inhibited by stimulation of the contralateral ear, particularly by broadband noise, and this inhibition also had short latency. Because Oc cells had short-latency responses and were well driven by broadband stimuli, we propose that they provide short-latency inhibition to DCN for both ipsilateral and contralateral stimuli. These results indicate more complex temporal behavior in DCN than has previously been emphasized, but they are consistent with the recently described nonlinear behavior to spectral manipulations and with the connectivity scheme deduced from such manipulations.  相似文献   
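
The convergence described in the circuit scheme (auditory-nerve excitation plus two inhibitory inputs onto a projection neuron) can be caricatured as a simple rate model. This is only a toy sketch; the weights and rates are illustrative assumptions, not quantities from the study:

```python
def projection_neuron_rate(an_rate, type2_rate, wbi_rate,
                           w_an=1.0, w_type2=1.5, w_wbi=0.8):
    """Type IV output = auditory-nerve excitation minus type II and wide-band inhibition."""
    drive = w_an * an_rate - w_type2 * type2_rate - w_wbi * wbi_rate
    return max(drive, 0.0)  # firing rates cannot go below zero

# Narrowband drive vs. broadband input that also recruits the wide-band inhibitor:
print(projection_neuron_rate(an_rate=80.0, type2_rate=20.0, wbi_rate=10.0))
print(projection_neuron_rate(an_rate=80.0, type2_rate=20.0, wbi_rate=60.0))
```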

6.
Auditory stream segregation (or streaming) is a phenomenon in which 2 or more repeating sounds differing in at least 1 acoustic attribute are perceived as 2 or more separate sound sources (i.e., streams). This article selectively reviews psychophysical and computational studies of streaming and comprehensively reviews more recent neurophysiological studies that have provided important insights into the mechanisms of streaming. On the basis of these studies, segregation of sounds is likely to occur beginning in the auditory periphery and continuing at least to primary auditory cortex for simple cues such as pure-tone frequency but at stages as high as secondary auditory cortex for more complex cues such as periodicity pitch. Attention-dependent and perception-dependent processes are likely to take place in primary or secondary auditory cortex and may also involve higher level areas outside of auditory cortex. Topographic maps of acoustic attributes, stimulus-specific suppression, and competition between representations are among the neurophysiological mechanisms that likely contribute to streaming. A framework for future research is proposed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

7.
Ethnographic accounts suggest that emotions are moderated in Chinese cultures and expressed openly in Mexican cultures. The authors tested this notion by comparing subjective, behavioral, and physiological aspects of emotional responses to 3 (warned, unwarned, instructed to inhibit responding) aversive acoustic startle stimuli in 95 Chinese Americans and 64 Mexican Americans. Subjective reports were consistent with ethnographic accounts; Chinese Americans reported experiencing significantly less emotion than Mexican Americans across all 3 startle conditions. Evidence from a nonemotional task suggested that these differences were not artifacts of cultural differences in the use of rating scales. Few cultural differences were found in emotional behavior or physiology, suggesting that these aspects of emotion are less susceptible to cultural influence. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

8.
Comments on the original article, "Assessing yourself as an emotional eater: Mission impossible?" by C. Evers, D. T. D. de Ridder, and M. A. Adriaanse (see record 2009-20990-009). Results of a functional MRI study (Bohon, Stice, & Spoor, 2009) contradict the assertion that it is "impossible" to self-assess emotional eating because the self-report emotional eating scale of the Dutch Eating Behavior Questionnaire (DEBQ-em) predicted important individual differences in reward response during negative moods. Evers et al. advance their argument in the context of results of four experiments where self-reported “emotional eaters” (DEBQ-em) did not eat more food during emotional encounters as compared to control conditions or “no emotional eaters.” However, the core characteristic of emotional eaters is not that they eat so much during distress (though binge eaters may do so), but that they do not show the typical stress response of eating less (the typical stress response being loss of appetite because of physiological effects that mimic satiety) (Gold & Chrousos, 2002). Accordingly, the moderator effect of emotional eating during distress would be that “no emotional eaters” eat less and “emotional eaters” eat the same or more compared to control conditions. Close inspection of the results of Evers et al. reveals that their “no emotional eaters” did not show the typical stress response of eating less. This opens the possibility that the null findings of Evers et al. may be simply explained by misclassification of “no emotional eaters” versus “emotional eaters” because of their use of median splits (a procedure notorious for possible misclassification of subjects into distinct groups). (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献
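
The median-split concern raised at the end can be illustrated with a toy simulation (the scores below are synthetic; nothing here comes from the original data): participants scoring near the sample median land in opposite groups depending on sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.normal(2.5, 0.6, size=200)      # synthetic "DEBQ-em" scores

def median_split(scores):
    return scores > np.median(scores)              # True = classified "emotional eater"

# Re-measure the same people with modest noise and split again:
remeasured = true_scores + rng.normal(0.0, 0.3, size=true_scores.size)
flipped = np.mean(median_split(true_scores) != median_split(remeasured))
print(f"{flipped:.0%} of participants switch groups under the median split")
```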

9.
Using the magnetic search coil technique to measure eye and ear movements, we trained cats by operant conditioning to look in the direction of light and sound sources with their heads fixed. Cats were able to localize noise bursts, single clicks, or click trains presented from sources located on the horizontal and vertical meridians within their oculomotor range. Saccades to auditory targets were less accurate and more variable than saccades to visual targets at the same spatial positions. Localization accuracy of single clicks was diminished compared with the long-duration stimuli presented from the same sources. Control experiments with novel auditory targets, never associated with visual targets, demonstrated that the cats localized the sound sources using acoustic cues and not from memory. The role of spectral features imposed by the pinna for vertical sound localization was shown by the breakdown in localization of narrow-band (one-sixth of an octave) noise bursts presented from sources along the midsagittal plane. In addition, we show that cats experience summing localization, an illusion associated with the precedence effect. Pairs of clicks presented from speakers at (±18°, 0°) with interclick delays of ±300 μs were perceived by the cat as originating from phantom sources extending from the midline to approximately ±10°.  相似文献

10.
We have described the acoustic pathway from the ear to the diencephalon in a sound-producing fish (Pollimyrus) based on simultaneous neurophysiological recordings from single neurons and injections of biotin pathway tracers at the recording sites. Fundamental transformations of auditory information from highly phase-locked and entrained responses in primary eighth nerve afferents and first-order medullary neurons to more weakly phase-locked responses in the auditory midbrain were revealed by physiological recordings. Anatomical pathway tracing uncovered a bilateral array of both first- and second-order medullary nuclei and a perilemniscal nucleus. Interconnections within the medullary auditory areas were extensive. Medullary nuclei projected to the auditory midbrain by means of the lateral lemniscus. Midbrain auditory areas projected to both ipsilateral and contralateral optic tecta and to an array of three nuclei in the auditory thalamus. The significance of these findings to the elucidation of mechanisms for the analysis of communication sounds and spatial hearing in this vertebrate animal is discussed.  相似文献   

11.
Emotion strengthens the subjective experience of recollection. However, these vivid and confidently remembered emotional memories may not necessarily be more accurate. We investigated whether the subjective sense of recollection for negative stimuli is coupled with enhanced memory accuracy for contextual details using the remember/know paradigm. Our results indicate a double-dissociation between the subjective feeling of remembering, and the objective memory accuracy for details of negative and neutral scenes. “Remember” judgments were boosted for negative relative to neutral scenes. In contrast, memory for contextual details and associative binding was worse for negative compared to neutral scenes given a “remember” response. These findings show that the enhanced subjective recollective experience for negative stimuli does not reliably indicate greater objective recollection, at least of the details tested, and thus may be driven by a different mechanism than the subjective recollective experience for neutral stimuli. (PsycINFO Database Record (c) 2011 APA, all rights reserved)  相似文献   

12.
Detection of auditory signals by frog inferior collicular neurons in the presence of spatially separated noise. J. Neurophysiol. 80: 2848-2859, 1998. Psychophysical studies have shown that the ability to detect auditory signals embedded in noise improves when signal and noise sources are widely separated in space; this allows humans to analyze complex auditory scenes, as in the cocktail-party effect. Although these studies established that improvements in detection threshold (DT) are due to binaural hearing, few physiological studies have been undertaken, and very little is known about the response of single neurons to spatially separated signal and noise sources. To address this issue, we examined the responses of neurons in the frog inferior colliculus (IC) to a probe stimulus embedded in a spatially separated masker. Frogs perform auditory scene analysis because females select mates in dense choruses by means of auditory cues. Results of the extracellular single-unit recordings demonstrate that 22% of neurons (A-type) exhibited improvements in signal DTs when probe and masker sources were progressively separated in azimuth. In contrast, 24% of neurons (V-type) showed the opposite pattern, namely, signal DTs were lowest when probe and masker were colocalized (in many instances lower than the DT to probe alone) and increased when the two sound sources were separated. The remaining neurons demonstrated a mix of these two types of patterns. An intriguing finding was the strong correlation between A-type masking release patterns and phasic neurons and a weaker correlation between V-type patterns and tonic neurons. Although not decisive, these results suggest that phasic units may play a role in release from masking observed psychophysically. Analysis of the data also revealed a strong and nonlinear interaction among probe, masker, and masker azimuth, and that signal DTs were influenced by two factors: 1) the unit's sensitivity to the probe in the presence of the masker and 2) the criterion level for estimating DT. For some units, it was possible to examine the interaction between these two factors and gain insights into the variation of DTs with masker azimuth. The implications of these findings are discussed in relation to signal detection in the auditory system.  相似文献
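
The two factors listed at the end of the abstract — the unit's sensitivity to the probe in the masker and the criterion level used to define threshold — can be made concrete with a toy rate-level sketch. The sigmoid rate-level function and all numbers below are assumptions for illustration, not the authors' analysis:

```python
import numpy as np

def rate_level(probe_db, masker_db, max_rate=100.0, slope=0.3):
    """Toy firing-rate curve: rate grows with probe level; the masker shifts the curve."""
    midpoint = 30.0 + 0.5 * masker_db
    return max_rate / (1.0 + np.exp(-slope * (probe_db - midpoint)))

def detection_threshold(masker_db, criterion_rate=20.0):
    """Lowest probe level whose driven rate reaches the criterion (the DT estimate)."""
    levels = np.arange(0.0, 90.0, 1.0)
    rates = rate_level(levels, masker_db)
    above = levels[rates >= criterion_rate]
    return above[0] if above.size else np.nan

# A higher masker level or a stricter criterion both raise the estimated DT:
print(detection_threshold(masker_db=30.0), detection_threshold(masker_db=50.0))
print(detection_threshold(masker_db=30.0, criterion_rate=50.0))
```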

13.
Two experiments examined listeners' ability to discriminate the geometric shape of simple resonating bodies on the basis of their corresponding auditory attributes. In cross-modal matching tasks, subjects listened to recordings of pairs of metal bars (Experiment 1) or wooden bars (Experiment 2) struck in sequence and then selected a visual depiction of the bar cross sections that correctly represented their relative widths and heights from two opposing pairs presented on a computer screen. Multidimensional scaling solutions derived from matching scores for metal and wooden bars indicated that subjects' performance varied directly with increasing differences in the width/height (W/H) ratios of both sets of bars. Subsequent acoustic analyses revealed that the frequency components from torsional vibrational modes and the ratios of frequencies of transverse bending modes in the bars correlated strongly with both the bars' W/H ratios and bar coordinates in the multidimensional configurations. The results suggest that listeners can encode the auditory properties of sound sources by extracting certain invariant physical characteristics of their gross geometric properties from their acoustic behavior.  相似文献   
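
The reported link between the bars' width/height ratio and the ratios of their transverse bending-mode frequencies is consistent with elementary beam theory: for a uniform free-free bar, each bending-mode frequency scales with the cross-sectional dimension in the plane of bending, so corresponding horizontal and vertical modes differ by exactly the factor W/H. A short sketch under that standard Euler-Bernoulli assumption (the bar dimensions and aluminium material constants are illustrative, not values from the experiments):

```python
import numpy as np

def bending_modes_hz(thickness_m, length_m, youngs_pa, density_kg_m3, n_modes=3):
    """Free-free Euler-Bernoulli bar: f_n = (beta_n*L)^2 / (2*pi*L^2) * sqrt(E*I/(rho*A)),
    with sqrt(I/A) = thickness / sqrt(12) for a rectangular cross section."""
    beta_L = np.array([4.730, 7.853, 10.996])[:n_modes]   # free-free mode constants
    gyration = thickness_m / np.sqrt(12.0)
    c = np.sqrt(youngs_pa / density_kg_m3)                # longitudinal wave speed
    return beta_L ** 2 * gyration * c / (2.0 * np.pi * length_m ** 2)

W, H, L = 0.02, 0.01, 0.30                # illustrative bar cross section and length (m)
E, rho = 69e9, 2700.0                     # assumed aluminium constants
f_wide = bending_modes_hz(W, L, E, rho)   # bending in the width direction
f_tall = bending_modes_hz(H, L, E, rho)   # bending in the height direction
print(f_wide / f_tall)                    # equals W/H (= 2.0) for every mode
```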

14.
Psychologists have long been interested in the integrated specificity hypothesis, which maintains that stressors elicit fairly distinct behavioral, emotional, and biological responses that are molded by selective pressures to meet specific demands from the environment. This issue of Psychological Bulletin features a meta-analytic review of the evidence for this proposition by T. F. Denson, M. Spanovic, and N. Miller (see record 2009-19763-001). Their review concluded that the meta-analytic findings support the “core concept behind the integrated specificity model” (p. 845) and reveal that “within the context of a stressful event, organisms produce an integrated and coordinated response at multiple levels (i.e., cognitive, emotional, physiological)” (p. 845). I argue that conclusions such as this are unwarranted, given the data. Aside from some effects for cortisol, little evidence of specificity was presented, and most of the significant findings reported would be expected by chance alone. I also contend that Denson et al. failed to consider some important sources of evidence bearing on the specificity hypothesis, particularly how appraisals and emotions couple with autonomic nervous system endpoints and functional indices of immune response. If selective pressures did give rise to an integrated stress response, such pathways almost certainly would have been involved. By omitting such outcomes from the meta-analysis, Denson et al. overlooked what are probably the most definitive tests of the specificity hypothesis. As a result, the field is back where it started: with a lot of affection for the concept of integrated specificity but little in the way of definitive evidence to refute or accept it. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

15.
During metamorphosis, ranid frogs shift from a purely aquatic to a partly terrestrial lifestyle. The central auditory system undergoes functional and neuroanatomical reorganization in parallel with the development of new sound conduction pathways adapted for the detection of airborne sounds. Neural responses to sounds can be recorded from the auditory midbrain of tadpoles shortly after hatching, with higher rates of synchronous neural activity and lower sharpness of tuning than observed in postmetamorphic animals. Shortly before the onset of metamorphic climax, there is a brief "deaf" period during which no auditory activity can be evoked from the midbrain, and a loss of connectivity is observed between medullary and midbrain auditory nuclei. During the final stages of metamorphic development, auditory function and neural connectivity are restored. The acoustic communication system of the adult frog emerges from these periods of anatomical and physiological plasticity during metamorphosis.  相似文献   

16.
Previously, we found that during films about age-typical losses, older adults experienced greater sadness than young adults, whereas their physiological responses were just as large. In the present study, our goal was to replicate this finding and extend past work by examining the role of cognitive functioning in age differences in emotional reactivity. We measured the autonomic and subjective responses of 240 adults (age range = 20 to 70) while they viewed films about age-typical losses from our previous work. Findings were fully supportive of our past work: The magnitude of subjective reactions to our films increased linearly over the adult years, whereas there were no age differences on the level of physiological reactivity. We also found that the subjective reactions of adults with high pragmatic intelligence were of moderate size independent of their own age or the age relevance of the emotion elicitor. In contrast, the subjective reactions of adults low on pragmatic intelligence were more variable. Together, this evidence suggests that research on age differences in emotional reactivity may benefit from a perspective that considers individual difference variables as well as contextual variations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

17.
Three experiments studied auditory streaming using sequences of alternating “ABA” triplets, where “A” and “B” were 50-ms tones differing in frequency by Δf semitones and separated by 75-ms gaps. Experiment 1 showed that detection of a short increase in the gap between a B tone and the preceding A tone, imposed on one ABA triplet, was better when the delay occurred early versus late in the sequence, and for Δf = 4 vs. Δf = 8. The results of this experiment were consistent with those of a subjective streaming judgment task. Experiment 2 showed that the detection of a delay 12.5 s into a 13.5-s sequence could be improved by requiring participants to perform a task on competing stimuli presented to the other ear for the first 10 s of that sequence. Hence, adding an additional task demand could improve performance via its effect on the perceptual organization of a sound sequence. The results demonstrate that attention affects streaming in an objective task and that the effects of build-up are not completely under voluntary control. In particular, even though build-up can impair performance in an objective task, participants are unable to prevent this from happening. (PsycINFO Database Record (c) 2011 APA, all rights reserved)  相似文献   
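
The stimulus structure is fully specified in the abstract (repeating A-B-A triplets of 50-ms tones separated by 75-ms gaps, with B shifted by Δf semitones). A minimal sketch of such a sequence, assuming a 500-Hz A tone and a 44.1-kHz sample rate (both assumptions; the abstract does not state them):

```python
import numpy as np

def aba_sequence(delta_f_semitones=4, n_triplets=10, f_a=500.0,
                 tone_ms=50.0, gap_ms=75.0, fs=44100):
    """Repeating A-B-A triplets: 50-ms tones separated by 75-ms silent gaps."""
    f_b = f_a * 2.0 ** (delta_f_semitones / 12.0)   # B sits delta_f semitones above A
    t = np.arange(int(tone_ms / 1000.0 * fs)) / fs
    gap = np.zeros(int(gap_ms / 1000.0 * fs))
    tone = lambda f: np.sin(2.0 * np.pi * f * t)
    triplet = np.concatenate([tone(f_a), gap, tone(f_b), gap, tone(f_a), gap])
    return np.tile(triplet, n_triplets)

small_df = aba_sequence(delta_f_semitones=4)   # tends to be heard as one stream
large_df = aba_sequence(delta_f_semitones=8)   # more readily splits into two streams
```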

18.
Onsets are salient and important transient (i.e. dynamic) features of acoustic signals, and evoke vigorous responses from most auditory neurons, but paradoxically these onset responses have most often been analysed with respect to steady-state stimulus features, e.g. the sound pressure level (SPL). In nearly all studies concerned with the coding of differences in SPL at the two ears (interaural level differences; ILDs), which provide a major cue for the azimuthal location of high frequency sound sources, interaural onset disparities were covaried with ILD, but the possibly confounding effects of this covariation on neuronal responses have been entirely neglected. Therefore, dichotic stimulus paradigms were designed here in which onset and steady-state features were varied independently. Responses were recorded from single neurons in the inferior colliculus of rats, anaesthetized with pentobarbital and xylazine. It is demonstrated that onset responses, or the onset response components of neurons with more complex temporal response patterns, are dependent on the binaural combination of dynamic envelope features associated with conventional ILD stimulus paradigms, but not on the binaural combination of steady-state SPLs reached after the onset. In contrast, late or sustained response components appear more sensitive to the binaural combination of steady-state SPLs. These data stress the general necessity for a separate analysis of onset and late response components, with respect to different stimulus features, and suggest a need for re-evaluation of existing studies on ILD coding. The sensitivity of onset responses to the binaural combination of envelope transients, rather than to steady-state ILD, is in line with their sensitivity to other interaural envelope disparities, created by stationary or moving sounds.  相似文献   
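
The key manipulation — varying onset (envelope) disparities independently of the steady-state interaural level difference — can be sketched as two channels that reach the same steady-state level through different rise times. The rise times, level, and carrier below are illustrative assumptions, not the stimulus values used in the study:

```python
import numpy as np

def dichotic_stimulus(rise_ms_left=5.0, rise_ms_right=20.0, steady_db=70.0,
                      carrier_hz=8000.0, dur_s=0.2, fs=44100):
    """Steady-state ILD of 0 dB, but an onset (envelope) disparity between the ears."""
    t = np.arange(int(dur_s * fs)) / fs
    def channel(rise_ms):
        env = np.minimum(t / (rise_ms / 1000.0), 1.0)   # linear onset ramp
        # amplitude in arbitrary units relative to 0 dB
        return env * 10.0 ** (steady_db / 20.0) * np.sin(2.0 * np.pi * carrier_hz * t)
    return channel(rise_ms_left), channel(rise_ms_right)

left, right = dichotic_stimulus()   # identical steady-state SPLs, different onsets
```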

19.
Fear-inducing stimuli were hypothesized to elicit fast heart rate (HR) responses but slow mean arterial blood pressure (MAP) responses and thus were studied in auditory fear conditioning and acoustic startle at high temporal resolution in freely moving mice and rats. Fear-induced instantaneous acceleration of HR reaching maximum physiological values and subsequent recovery to baseline were observed. The MAP response consisted of an immediate, mild, and transient increase followed by a sluggish, profound elevation and slow recovery. HR and MAP responses served as reliable indicators of conditioned fear in mice with dissociated temporal dynamics. Unconditioned auditory stimuli, including acoustic startle stimuli, elicited only fast, mild, and transient MAP and HR elevations in mice and rats, reflecting arousal and attention under these experimental conditions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

20.
Contends that in the literature on the vocal expression of emotion, there is a discrepancy between reported high accuracy in vocal-auditory recognition and a lack of evidence for the acoustic differentiation of vocal expression. The latter is explained by (a) a paucity of research on voice quality, (b) neglect of the social signaling functions of affect vocalization, and (c) insufficiently precise conceptualization of the underlying emotional states. A component-patterning model of vocal affect expression is proposed that attempts to link the outcomes of antecedent event evaluation to biologically based response patterns. The likely phonatory and articulatory correlates of the physiological responses characterizing different emotional states are described in the form of 3 major voice types (narrow/wide, lax/tense, full/thin). Specific predictions about changes in acoustic parameters resulting from changing voice types are compared with the pattern of empirical findings yielded by a comprehensive survey of the literature on vocal cues in emotional expression. Although the comparison is largely limited to the lax/tense voice type (because acoustic parameters relevant to the other voice types have not yet been systematically studied), a high degree of convergence is revealed. (120 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   
