1.
The naming of manipulable objects by older and younger adults was evaluated across auditory, visual, and multisensory conditions. Older adults were less accurate and slower in naming across conditions, and all participants were less accurate and slower at naming action sounds than pictures or audiovisual combinations. Moreover, there was a sensory-by-age-group interaction: older adults showed lower accuracy and longer latencies in auditory naming that were unrelated to hearing loss, with modest improvement from multisensory cues. These findings support age-related deficits in object action naming and suggest that auditory confrontation naming may be more sensitive than visual naming. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
2.
Introduces a section of this Special Edition in Cognitive Neuroscience, which examines the neural mechanisms and cognitive factors that influence the formation of multimodal percepts. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
3.
Wilcox Teresa; Woods Rebecca; Chapa Catherine; McCurry Sarah 《Developmental Psychology》2007,43(2):479
Recent research indicates that by 4.5 months, infants use shape and size information as the basis for individuating objects but that it is not until 11.5 months that they use color information for this purpose. The present experiments investigated the extent to which infants' sensitivity to color information could be increased through select experiences. Five experiments were conducted with 10.5- and 9.5-month-olds. The results revealed that multimodal (visual and tactile), but not unimodal (visual only), exploration of the objects prior to the individuation task increased 10.5-month-olds' sensitivity to color differences. These results suggest that multisensory experience with objects facilitates infants' use of color information when individuating objects. In contrast, 9.5-month-olds did not benefit from the multisensory procedure; possible explanations for this finding are explored. Together, these results reveal how an everyday experience, combined visual and tactile exploration of objects, can promote infants' use of color information as the basis for individuating objects. More broadly, these results shed light on the nature of infants' object representations and the cognitive mechanisms that support infants' changing sensitivity to color differences. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
4.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and we provide new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
5.
We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in which they had to monitor a rapidly presented central stream of visual letters for occasionally presented target digits) or no perceptual load (in which the central stream was replaced by a fixation point). The results of 3 experiments showed that all 3 cues captured visuo-spatial attention in the no-load condition. By contrast, only the bimodal cues captured visuo-spatial attention in the high-load condition, indicating for the first time that multisensory integration can play a key role in disengaging spatial attention from a concurrent perceptually demanding stimulus. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
6.
7.
8.
It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
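The segmentation mechanism studied above is conventionally modeled as tracking transitional probabilities: within a triplet each element perfectly predicts the next, while transitions across triplet boundaries are much weaker. The following sketch illustrates that computation with made-up syllables (the paper's actual stimuli are not given in the abstract):

```python
import random
from collections import Counter

# Illustrative triplets; within-triplet transitions are deterministic,
# boundary transitions depend on which triplet happens to come next.
triplets = [("ba", "di", "ku"), ("go", "la", "tu"), ("pi", "ro", "se")]

# Build a familiarization stream of randomly concatenated triplets.
random.seed(1)
stream = [s for _ in range(200) for s in random.choice(triplets)]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def tp(a, b):
    """Transitional probability P(b | a) estimated from the stream."""
    return pair_counts[(a, b)] / first_counts[a]

print(tp("ba", "di"))   # within-triplet: exactly 1.0
print(tp("ku", "go"))   # across a boundary: roughly 1/3
```

Dips in transitional probability mark candidate word boundaries; the study's cross-modal question is whether such statistics are tracked separately or jointly for concurrent auditory and visual streams.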
9.
M. W. Spratling 《Connection Science》2016,28(4):346-383
Predictive coding (PC) is a leading theory of cortical function that has previously been shown to explain a great deal of neurophysiological and psychophysical data. Here it is shown that PC can perform almost exact Bayesian inference when applied to computing with population codes. It is demonstrated that the proposed algorithm, based on PC, can: decode probability distributions encoded as noisy population codes; combine priors with likelihoods to calculate posteriors; perform cue integration and cue segregation; perform function approximation; be extended to perform hierarchical inference; simultaneously represent and reason about multiple stimuli; and perform inference with multi-modal and non-Gaussian probability distributions. PC thus provides a neural network-based method for performing probabilistic computation and provides a simple, yet comprehensive, theory of how the cerebral cortex performs Bayesian inference.
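Two of the Bayesian computations listed in the abstract, decoding a noisy population code and integrating two cues, can be sketched directly. This is the standard probabilistic-population-code computation that Spratling's PC network is shown to approximate, not his algorithm itself; all tuning-curve parameters here are illustrative:

```python
import numpy as np

def gaussian_population(stimulus, pref, gain=20.0, width=10.0):
    """Mean firing rates of neurons with Gaussian tuning curves."""
    return gain * np.exp(-0.5 * ((stimulus - pref) / width) ** 2)

def decode_posterior(rates, pref, width=10.0):
    """Log-likelihood decoding of a Poisson population code:
    log P(s | r) is proportional to sum_i r_i * log f_i(s),
    up to terms that do not depend on s."""
    s_grid = np.linspace(pref.min(), pref.max(), 501)
    tuning = np.exp(-0.5 * ((s_grid[:, None] - pref[None, :]) / width) ** 2)
    log_post = rates @ np.log(tuning + 1e-12).T
    post = np.exp(log_post - log_post.max())
    return s_grid, post / post.sum()

rng = np.random.default_rng(0)
pref = np.linspace(-90, 90, 64)   # preferred stimuli of 64 neurons

# Two noisy "cues" (e.g. visual and auditory) for the same stimulus at 10 deg;
# the higher-gain population encodes the more reliable cue.
r_vis = rng.poisson(gaussian_population(10.0, pref, gain=30.0))
r_aud = rng.poisson(gaussian_population(10.0, pref, gain=10.0))

# Cue integration: with independent Poisson noise and matched tuning, the
# combined posterior is obtained simply by summing the two spike-count vectors.
s, post = decode_posterior(r_vis + r_aud, pref)
print("integrated estimate:", s[np.argmax(post)])
```

The summed-spike-count shortcut is the textbook result that makes population codes attractive for Bayesian theories: reliability-weighted integration falls out of simple addition, and the paper's contribution is showing a PC network can realize such computations.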
10.
Felipe Espinosa; José A. Jiménez; Enrique Santiso; Alfredo Gardel; Diego Pérez; Jesús Casanova; Carlos Santos 《Measurement》2011,44(2):326-337
This paper presents PGAMS (Portable Global Activity Measurement System), an autonomous, non-intrusive, and universal electronic system able to measure and synchronously register the threefold driver-vehicle-route activity. It allows one to evaluate the direct effect that real road-traffic conditions have on environmental pollution. Among the variables measured simultaneously are: (a) vehicle activity: engine speed and temperature, fuel consumption, global position, instantaneous and average speed and acceleration, and frontal and lateral inclination; (b) route features: weather conditions, slope and camber of the road, and traffic density; and (c) driver activity: clutch, throttle, and brake pedal actions. The designed prototype can be installed on board conventional or industrial vehicles, working in both urban and inter-urban conditions. The paper describes the integration with pollutant-emission meters and includes experimental results obtained in real traffic conditions, relating vehicle-driver-route activity to gas emissions.
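The three variable groups the abstract enumerates must share a common time base to be registered synchronously. A minimal sketch of such a record follows; all field names are hypothetical, since the abstract does not specify the system's actual data format:

```python
import time
from dataclasses import dataclass

@dataclass
class ActivitySample:
    """One synchronized driver-vehicle-route reading (illustrative only)."""
    timestamp: float        # shared clock keeps the three groups aligned
    # (a) Vehicle activity
    engine_rpm: float
    engine_temp_c: float
    fuel_rate_lph: float    # litres per hour
    latitude: float
    longitude: float
    speed_kmh: float
    accel_ms2: float
    # (b) Route features
    road_slope_pct: float
    traffic_density: str    # e.g. "free", "dense", "congested"
    # (c) Driver activity (pedal positions, 0..1)
    clutch: float
    throttle: float
    brake: float

def log_sample(**readings) -> ActivitySample:
    """Stamp a set of readings with one monotonic clock value, so vehicle,
    route, and driver channels can later be correlated with emissions data."""
    return ActivitySample(timestamp=time.monotonic(), **readings)
```

Storing one timestamp per composite record, rather than per channel, is what lets the paper correlate pedal actions and road conditions with instantaneous emissions.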