786 results found (search time: 15 ms)
1.
2.
Research on cross-modal performance in nonhuman primates is limited to a small number of sensory modalities and testing methods. To broaden the scope of this research, the authors tested capuchin monkeys (Cebus apella) for a seldom-studied cross-modal capacity in nonhuman primates, auditory-visual recognition. Monkeys were simultaneously played 2 video recordings of a face producing different vocalizations and a sound recording of 1 of the vocalizations. Stimulus sets varied from naturally occurring conspecific vocalizations to experimentally controlled human speech stimuli. The authors found that monkeys preferred to view face recordings that matched presented vocal stimuli. Their preference did not differ significantly across stimulus species or other stimulus features. However, the reliability of the latter set of results may have been limited by sample size. From these results, the authors concluded that capuchin monkeys exhibit auditory-visual cross-modal perception of conspecific vocalizations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
3.
Although word stress has been hailed as a powerful speech-segmentation cue, the results of 5 cross-modal fragment priming experiments revealed limitations to stress-based segmentation. Specifically, the stress pattern of auditory primes failed to have any effect on the lexical decision latencies to related visual targets. A determining factor was whether the onset of the prime was coarticulated with the preceding speech fragment. Uncoarticulated (i.e., concatenated) primes facilitated priming. Coarticulated ones did not. However, when the primes were presented in a background of noise, the pattern of results reversed, and a strong stress effect emerged: Stress-initial primes caused more priming than non-initial-stress primes, regardless of the coarticulatory cues. The results underscore the role of coarticulation in the segmentation of clear speech and that of stress in impoverished listening conditions. More generally, they call for an integrated and signal-contingent approach to speech segmentation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
4.
Volkmer, Markus 《Natural Computing》2004,3(2):177-193
The existence of spectro-temporal receptive fields and evidence for population coding in auditory cortex motivate the development of models that explicitly operate in the time-frequency domain and are based on a pulsed neural network. In presenting such a model, a formal connection between the fields of Time-Frequency Analysis and Pulsed Neural Networks is established. The resulting neural time-frequency signal representation, derived from neural population coding, is shown to be representable as a signal-dependent overcomplete dictionary. Signal decomposition and filtering effects are presented, indicating obvious technical applications of the proposed model. This revised version was published online in June 2006 with corrections to the cover date.
5.
Auditory filter bank models are studied. The Gammatone auditory filter bank model and a Gaussian-wavelet auditory filter bank model based on wavelet analysis are compared: predicted hearing-threshold curves are derived for each and compared against the standard threshold curve. The results show that the Gaussian-wavelet auditory filter bank model is closer to the actual filtering behaviour of the human ear. The specific loudness predicted by the Gaussian-wavelet model is then compared with the specific loudness computed by the Artemis software of Head Acoustics (Germany); the loudness difference in every critical band is less than 0.6 sone.
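For context, the Gammatone filter compared in this abstract has a standard closed-form impulse response, t^(n-1) exp(-2πb·ERB(fc)·t) cos(2πfc·t). The plain-Python sketch below generates one channel of such a filter bank; the bandwidth factor b = 1.019 and the Glasberg-Moore ERB formula are conventional choices, not values taken from the paper:

```python
import math

def erb(fc):
    # Equivalent rectangular bandwidth (Glasberg-Moore form) at centre frequency fc (Hz)
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.025, order=4, b=1.019):
    # Sampled impulse response of an order-4 Gammatone filter centred at fc,
    # at sampling rate fs, truncated to `duration` seconds
    ir = []
    for i in range(int(duration * fs)):
        t = i / fs
        ir.append((t ** (order - 1))
                  * math.exp(-2.0 * math.pi * b * erb(fc) * t)
                  * math.cos(2.0 * math.pi * fc * t))
    return ir

# One channel of a Gammatone filter bank: 1 kHz centre frequency, 16 kHz sampling rate
ir = gammatone_ir(1000.0, 16000)
```

A full analysis filter bank convolves the input signal with a set of such impulse responses at centre frequencies spaced along the ERB scale.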
6.
Three experiments were designed to test whether perception and action are coordinated in a way that distinguishes sequencing from timing (Pfordresher, 2003). Each experiment incorporated a trial design in which altered auditory feedback (AAF) was presented for varying lengths of time and then withdrawn. Experiments 1 and 2 included AAF that resulted in action-effect asynchronies (delayed auditory feedback) during simple tapping (Experiment 1) and melody production (Experiment 2). Asynchronous AAF immediately slowed production; this effect then diminished rapidly after removal of AAF. By contrast, sequential alterations of feedback pitch during melody production (Experiment 3) had an effect that varied over successive presentations of AAF (by increasing error rates) that lasted after its withdrawal. The presence of auditory feedback after withdrawal of asynchronous AAF (Experiments 1 and 2) led to overcompensation of timing, whereas the presence of auditory feedback did not influence performance after withdrawal of AAF in Experiment 3. Based on these results, we suggest that asynchronous AAF perturbs the phase of an internal timekeeper, whereas alterations to feedback pitch over time degrade the internal representation of sequence structure. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
7.
A pitch detection method for strong-noise backgrounds based on computational auditory scene analysis   (Cited: 4; self-citations: 0; other citations: 0)
Pitch detection for speech in strong noise is a problem of real practical value and considerable difficulty. This paper proposes a pitch detection method based on computational auditory scene analysis (CASA). The method exploits the perceptual properties of human hearing and is suited to extracting pitch information at low signal-to-noise ratios and in the presence of interfering speech. Experimental results show that the proposed method is highly effective.
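The CASA method in this paper is considerably more elaborate than can be shown here. As a minimal illustration of the underlying idea of periodicity-based pitch extraction (not the authors' algorithm, which is designed to survive strong noise), a plain autocorrelation estimator might look like:

```python
import math

def autocorr_pitch(signal, fs, fmin=60.0, fmax=400.0):
    # Estimate the fundamental frequency of one frame by finding the lag
    # (within the plausible pitch range) that maximises the autocorrelation.
    n = len(signal)
    lag_min = int(fs / fmax)          # shortest period considered
    lag_max = int(fs / fmin)          # longest period considered
    best_lag, best_r = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        r = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return fs / best_lag

# Demo: a clean 200 Hz sinusoid sampled at 8 kHz
fs = 8000
frame = [math.sin(2 * math.pi * 200.0 * i / fs) for i in range(800)]
f0 = autocorr_pitch(frame, fs)  # close to 200 Hz for this clean frame
```

Such a bare estimator degrades quickly at low SNR, which is exactly the regime the CASA-based approach above targets.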
8.
Subband speech denoising based on maximum-likelihood estimation   (Cited: 1; self-citations: 0; other citations: 1)
The basic spectral subtraction algorithm is improved. The speech signal is first processed by a filter bank that conforms to an auditory perception model; the characteristics of the background noise are then estimated by automatically tracking the envelope of the low-energy portions of the signal; finally, the improved spectral subtraction technique filters and enhances each subband. Experiments show that the improved spectral subtraction technique performs well.
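Basic single-frame spectral subtraction, the starting point this abstract improves on, can be sketched as follows. This is a textbook version with a spectral floor; the subband filtering and envelope-based noise tracking described above are not reproduced here:

```python
import cmath
import math

def dft(x):
    # Naive O(n^2) discrete Fourier transform (adequate for a short demo frame)
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    # Inverse DFT, returning the real part of each reconstructed sample
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)).real / n
            for k in range(n)]

def spectral_subtract(noisy_frame, noise_mag, alpha=1.0, floor=0.01):
    # Subtract an estimated noise magnitude spectrum (noise_mag, one value
    # per bin) from the noisy magnitude spectrum, keep the noisy phase,
    # and clamp each bin to a small spectral floor to avoid negative magnitudes.
    X = dft(noisy_frame)
    enhanced = []
    for Xk, Nk in zip(X, noise_mag):
        mag = max(abs(Xk) - alpha * Nk, floor * abs(Xk))
        enhanced.append(cmath.rect(mag, cmath.phase(Xk)))
    return idft(enhanced)

# Demo: with a zero noise estimate, the frame passes through unchanged
frame = [math.sin(2 * math.pi * 5 * k / 64) for k in range(64)]
clean = spectral_subtract(frame, [0.0] * 64)
```

In a complete enhancer, noise_mag would be updated from speech-free or low-energy regions, frames would be windowed and overlap-added, and, as in this paper, the subtraction would be applied per auditory subband.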
9.
何璞 《电声技术》2007,31(11):41-43
This paper surveys research on auditory distance localization and summarizes the acoustic factors involved, including the sound pressure level and spectrum of the source, reflections introduced by the listening environment, and scattering by the head and pinna.
10.
吕俊 《电子科技》2011,24(7):1-2,17
A prosthetic-hand control system is proposed that fuses information from scalp (EEG) and surface electromyographic signals to decode the velocity, acceleration, and trajectory of hand movement in real time, enabling dexterous control of the prosthetic hand. No implanted electrodes are required, which simplifies maintenance.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号