  Paid full text   51 articles
  Free   0 articles
Electrical engineering   1
Comprehensive/general   3
Metalworking   1
Machinery & instruments   2
Light industry   4
General industrial technology   6
Metallurgy   29
Automation technology   5
  2023   4
  2022   1
  2020   2
  2017   1
  2016   3
  2015   1
  2014   1
  2013   1
  2012   1
  2011   10
  2010   3
  2009   7
  2008   4
  2007   4
  2006   1
  2004   2
  2001   5
Sort order: 51 results found; search time: 15 ms
1.
The naming of manipulable objects by older and younger adults was evaluated across auditory, visual, and multisensory conditions. Older adults were less accurate and slower in naming across conditions, and all subjects were less accurate and slower when naming action sounds than pictures or audiovisual combinations. Moreover, there was a sensory condition by age group interaction, revealing lower accuracy and longer latencies in auditory naming for older adults that were unrelated to hearing insensitivity, with modest improvement from multisensory cues. These findings support age-related deficits in object action naming and suggest that auditory confrontation naming may be more sensitive than visual naming. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
2.
Introduces a section of this Special Edition in Cognitive Neuroscience, which examines the neural mechanisms and cognitive factors that influence the formation of multimodal percepts. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
3.
Recent research indicates that by 4.5 months, infants use shape and size information as the basis for individuating objects but that it is not until 11.5 months that they use color information for this purpose. The present experiments investigated the extent to which infants' sensitivity to color information could be increased through select experiences. Five experiments were conducted with 10.5- and 9.5-month-olds. The results revealed that multimodal (visual and tactile), but not unimodal (visual only), exploration of the objects prior to the individuation task increased 10.5-month-olds' sensitivity to color differences. These results suggest that multisensory experience with objects facilitates infants' use of color information when individuating objects. In contrast, 9.5-month-olds did not benefit from the multisensory procedure; possible explanations for this finding are explored. Together, these results reveal how an everyday experience, combined visual and tactile exploration of objects, can promote infants' use of color information as the basis for individuating objects. More broadly, these results shed light on the nature of infants' object representations and the cognitive mechanisms that support infants' changing sensitivity to color differences. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
4.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and we provide new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
5.
We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in which they had to monitor a rapidly presented central stream of visual letters for occasionally presented target digits) or no perceptual load (in which the central stream was replaced by a fixation point). The results of 3 experiments showed that all 3 cues captured visuo-spatial attention in the no-load condition. By contrast, only the bimodal cues captured visuo-spatial attention in the high-load condition, indicating for the first time that multisensory integration can play a key role in disengaging spatial attention from a concurrent perceptually demanding stimulus. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
6.
A multisensory virtual-reality system integrating vision, hearing, and force feedback was developed. A novel 6-DOF master manipulator with a decoupled double-parallel structure provides human-machine interaction and force feedback, and a software engine based on OpenGL and OpenAL delivers high-performance stereoscopic vision and audio on a PC. A virtual table-tennis ball-bouncing experiment verified the high performance of the system and the effectiveness of the control method.
7.
Application of multi-sensor information fusion in the motion control of a wheeled robot   Total citations: 1; self-citations: 0; citations by others: 1
The construction and implementation of a multi-sensor information fusion model are analyzed, and the cooperative and complementary information provided by multi-sensor fusion is used to control the motion of a wheeled robot. Test runs show that the robot can travel autonomously along circles with radii from 40 cm to 100 cm and can flexibly adjust its track radius; when the robot comes within about 5 cm of the arena edge, it automatically detects the edge position and promptly adjusts its direction of motion according to its own pose. The system's motion control…
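The complementary fusion described in this abstract can be illustrated with a minimal sketch. The sensor names, the blending weight, and the edge-check helper below are illustrative assumptions, not the paper's actual implementation:

```python
def fuse_complementary(gyro_heading, encoder_heading, alpha=0.9):
    """Complementary fusion of two heading estimates (degrees).

    The gyro is trusted for short-term changes and the wheel encoders
    for long-term stability; alpha sets the blend. Both the sensor
    choice and the weight are hypothetical.
    """
    return alpha * gyro_heading + (1.0 - alpha) * encoder_heading


def near_edge(distance_cm, threshold_cm=5.0):
    """Edge check mirroring the ~5 cm arena-edge rule described above."""
    return distance_cm <= threshold_cm
```

For example, `fuse_complementary(30.0, 20.0)` yields 29.0 with the default weight, leaning heavily on the gyro reading while the encoders slowly correct drift.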
8.
It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
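The triplet segmentation probed in this study is standardly modeled with transitional probabilities between adjacent elements: within a triplet the transition is certain, while across a triplet boundary it is not. A minimal sketch (the example stream is illustrative, not the authors' stimuli):

```python
from collections import Counter

def transitional_probabilities(stream):
    """Return P(b | a) for every adjacent pair (a, b) in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}
```

With a stream built from the triplets "abc" and "def", the within-triplet transition a→b has probability 1.0, while the boundary transition c→d is lower, which is the statistical cue learners exploit to segment the stream.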
9.
Predictive coding (PC) is a leading theory of cortical function that has previously been shown to explain a great deal of neurophysiological and psychophysical data. Here it is shown that PC can perform almost exact Bayesian inference when applied to computing with population codes. It is demonstrated that the proposed algorithm, based on PC, can: decode probability distributions encoded as noisy population codes; combine priors with likelihoods to calculate posteriors; perform cue integration and cue segregation; perform function approximation; be extended to perform hierarchical inference; simultaneously represent and reason about multiple stimuli; and perform inference with multi-modal and non-Gaussian probability distributions. PC thus provides a neural network-based method for performing probabilistic computation and provides a simple, yet comprehensive, theory of how the cerebral cortex performs Bayesian inference.
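The cue-integration capability mentioned above can be illustrated with the textbook Gaussian case, where the Bayesian answer has a closed form: each cue is weighted by its reliability (inverse variance), and the fused estimate is more reliable than either cue alone. This is a standard sketch of the computation, not the paper's predictive-coding network:

```python
def integrate_cues(mu_a, var_a, mu_b, var_b):
    """Bayes-optimal fusion of two independent Gaussian cue estimates.

    Returns the posterior mean (a reliability-weighted average) and the
    posterior variance (smaller than either input variance).
    """
    prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
    mu = (prec_a * mu_a + prec_b * mu_b) / (prec_a + prec_b)
    var = 1.0 / (prec_a + prec_b)
    return mu, var
```

For instance, two equally reliable cues at 0.0 and 2.0 (variance 1.0 each) fuse to a mean of 1.0 with variance 0.5; an unreliable cue is pulled toward a reliable one.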
10.
This paper presents PGAMS (Portable Global Activity Measurement System), an autonomous, non-intrusive, and universal electronic instrument able to simultaneously measure and synchronously record the threefold driver-vehicle-route activity. It makes it possible to evaluate the direct effect that real road-traffic conditions have on environmental pollution. Among the variables measured simultaneously are: (a) from vehicle activity: engine regime and temperature, fuel consumption, global position, instantaneous and average speed and acceleration, and frontal and lateral inclination; (b) related to route features: weather conditions, slope and camber of the road, and traffic density; and (c) derived from driver activity: clutch, throttle, and brake-pedal actions. The designed prototype can be installed on board conventional or industrial vehicles and works in both urban and inter-urban conditions. The paper describes the integration with pollutant-emission meters and includes experimental results obtained in real traffic conditions, relating vehicle-driver-route activity to gas emissions.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号