By access type:
  Paid full text: 54 articles
  Free: 32 articles
  Free (domestic): 29 articles
By subject:
  Electrical engineering: 1 article
  General: 4 articles
  Light industry: 6 articles
  Radio and electronics: 4 articles
  General industrial technology: 1 article
  Metallurgy: 23 articles
  Automation technology: 76 articles
By year:
  2024: 10 articles
  2023: 25 articles
  2022: 29 articles
  2021: 15 articles
  2020: 5 articles
  2019: 3 articles
  2015: 1 article
  2013: 1 article
  2011: 4 articles
  2010: 2 articles
  2009: 3 articles
  2008: 2 articles
  2007: 6 articles
  2006: 1 article
  2004: 2 articles
  2003: 1 article
  2002: 2 articles
  2001: 1 article
  1998: 1 article
  1992: 1 article
Sorted results: a total of 115 records were retrieved (search time: 15 ms)
51.
Prepulse inhibition (PPI) is an operational measure of sensorimotor gating that is thought to probe preattentional filtering mechanisms. PPI is deficient in several neuropsychiatric disorders, possibly reflecting abnormalities in frontal-cortical-striatal circuitry. Several studies support the predictive validity of animal PPI to model human sensorimotor gating phenomena but only limited studies have addressed the effects of aging. Studies in humans suggest that PPI is improved or unaffected as humans age (>60 years) and does not correlate with cognitive decline in aged populations. Rodent studies to date, however, suggest that PPI declines with age. Here we tested the hypothesis that PPI measures in rodents are sensitive to stimulus modality, with the prediction that intact sensory modalities in aged animals would be predictive of aging-induced increases in PPI. To test our hypothesis, we assessed PPI using acoustic, tactile, and visual prepulses in young (4 month) and old (23 month) C57BL/6N mice. Consistent with data across species, we observed reduced startle reactivity in older mice. Aging effects on PPI interacted significantly with prepulse modality, with deficient acoustic PPI but increased visual and tactile PPI in aged animals. These data are therefore consistent with PPI studies in older humans when controlling for hearing impairments. The results are discussed in terms of 1) cross-species translational validity for mouse PPI testing, 2) the need for startle reactivity differences to be accounted for in PPI analyses, and 3) the utility of cross-modal PPI testing in subjects where hearing loss has been documented. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
52.
Person re-identification (re-ID) retrieves and matches images of a target pedestrian across cameras over large spans of space and time, making it possible to associate pedestrians even when biometric cues such as faces are unavailable. It has become a key component and supporting technology of intelligent video surveillance systems and plays an important role in smart policing, smart cities, and other areas of national economic development. In recent years, person re-ID has attracted growing attention and has advanced rapidly. After a brief introduction to the task, this paper surveys, in light of the needs and challenges of both technical development and real-world deployment, recent progress in several active directions: occluded person re-ID, unsupervised person re-ID, virtual data generation, domain-generalized person re-ID, cloth-changing person re-ID, cross-modal person re-ID, and person search. It summarizes the current state and open problems of each direction and concludes with an outlook on future trends, in the hope of providing a reference for researchers working on person re-identification and of helping to advance the field.
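As a purely illustrative sketch of the retrieval-and-matching step described above (not a method from the surveyed literature), the following code ranks a gallery of pedestrian images by cosine similarity between L2-normalized embeddings; the random tensors stand in for features from a real re-ID backbone.

```python
import torch
import torch.nn.functional as F

def rank_gallery(query_feat: torch.Tensor, gallery_feats: torch.Tensor) -> torch.Tensor:
    """Return gallery indices sorted by cosine similarity to the query.

    query_feat:    (D,)   embedding of the probe pedestrian image
    gallery_feats: (N, D) embeddings of gallery images from other cameras
    """
    q = F.normalize(query_feat.unsqueeze(0), dim=1)   # (1, D)
    g = F.normalize(gallery_feats, dim=1)             # (N, D)
    sims = (q @ g.t()).squeeze(0)                     # (N,) cosine similarities
    return sims.argsort(descending=True)

# Toy usage: random vectors stand in for embeddings from a trained re-ID model.
torch.manual_seed(0)
query = torch.randn(256)
gallery = torch.randn(1000, 256)
print(rank_gallery(query, gallery)[:5])  # indices of the 5 closest gallery images
```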
53.
In recent years, the development of deep learning has further advanced hash-based retrieval. Most existing hashing methods use convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to process image and text information, respectively. This subjects images and texts to local constraints, and matching on the given labels alone cannot capture fine-grained information, often leading to suboptimal results. Motivated by the development of the transformer model, we propose a framework called ViT2CMH for deep cross-modal hashing that is built mainly on the Vision Transformer rather than on CNNs or RNNs. Specifically, we use a BERT network to extract text features and the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. Extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against hashing and image-text matching baselines, show that our method achieves better performance.
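The following is a minimal sketch of the kind of shared hashing head such a framework might use, assuming pre-extracted ViT/BERT features and a 64-bit code length; the class name, dimensions, and the tanh relaxation are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CrossModalHashHead(nn.Module):
    """Project image and text features into a shared K-bit Hamming space.

    The ViT and BERT encoders are represented here by pre-extracted feature
    vectors; only the hashing head is sketched.
    """
    def __init__(self, img_dim=768, txt_dim=768, n_bits=64):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, n_bits)
        self.txt_proj = nn.Linear(txt_dim, n_bits)

    def forward(self, img_feat, txt_feat):
        # tanh is a differentiable surrogate for binary codes during training
        img_code = torch.tanh(self.img_proj(img_feat))
        txt_code = torch.tanh(self.txt_proj(txt_feat))
        return img_code, txt_code

    @torch.no_grad()
    def binarize(self, code):
        return torch.sign(code)  # {-1, +1} codes used at retrieval time

head = CrossModalHashHead()
img_code, txt_code = head(torch.randn(4, 768), torch.randn(4, 768))
print(head.binarize(img_code).shape)  # torch.Size([4, 64])
```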
54.
Models of speech perception attribute a different role to contextual information in the processing of assimilated speech. This study concerned perceptual processing of regressive voice assimilation in French. This phonological variation is asymmetric in that assimilation is partial for voiced stops and nearly complete for voiceless stops. Two auditory-visual cross-modal form priming experiments were used to examine perceptual compensation for assimilation in French words with voiceless versus voiced stop offsets. The results show that, for the former segments, assimilating context enhances underlying form recovery, whereas it does not for the latter. These results suggest that two sources of information (contextual information and bottom-up information from the assimilated forms themselves) are complementary and both come into play during the processing of fully or partially assimilated word forms. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
55.
Objective: Aspect-level multimodal sentiment analysis, which predicts the sentiment polarity of a specific aspect mentioned in multimodal data, has drawn growing attention. However, existing methods give insufficient consideration to the guiding role of the aspect term in modeling intra-modal context and in fine-grained inter-modal alignment, which limits performance. To address this, we propose an aspect-level multimodal co-attention graph convolutional sentiment analysis model (AMCGC) that jointly models aspect-oriented intra-modal contextual semantic dependencies and fine-grained cross-modal alignment. Method: To capture aspect-oriented local semantic relevance within each modality, AMCGC uses a self-attention mechanism with an orthogonality constraint to generate a semantic graph for each modality. Graph convolution then produces a textual semantic graph representation containing the aspect term and a visual semantic graph representation into which the aspect term is injected, and two gated local cross-modal interaction mechanisms operating in opposite directions progressively align the textual and visual graph representations at a fine granularity, narrowing the heterogeneity gap between modalities. Finally, an aspect mask selects the aspect-node features from each modality's graph representation as the sentiment representation, and a cross-modal loss reduces the discrepancy between the heterogeneous aspect features. Results: Compared with nine methods on two multimodal datasets, the model improves accuracy by 1.76% over the second-best model on Twitter-2015 and by 1.19% on Twitter-2017. Ablation studies on the orthogonality constraint, the cross-modal loss, and the cross co-attention multimodal fusion confirm the contribution of each component. Conclusion: The proposed AMCGC model better captures local semantic relevance within modalities and fine-grained alignment across modalities, improving the accuracy of aspect-level multimodal sentiment analysis.
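As a rough illustration of one ingredient named above, the sketch below shows a self-attention layer whose attention matrix serves as a per-modality semantic graph and is regularized toward orthogonality; the exact formulation in AMCGC may differ, and all names and dimensions here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrthoSelfAttention(nn.Module):
    """Self-attention whose attention matrix doubles as a semantic graph.

    An orthogonality penalty encourages different tokens to attend to
    distinct neighborhoods, yielding a less redundant graph.
    """
    def __init__(self, dim=128):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (B, N, D)
        scores = self.q(x) @ self.k(x).transpose(1, 2) / x.size(-1) ** 0.5
        attn = F.softmax(scores, dim=-1)           # (B, N, N) semantic graph
        eye = torch.eye(attn.size(1), device=x.device)
        ortho_penalty = ((attn @ attn.transpose(1, 2) - eye) ** 2).mean()
        return attn, ortho_penalty                 # penalty is added to the training loss

attn_layer = OrthoSelfAttention()
graph, penalty = attn_layer(torch.randn(2, 10, 128))
print(graph.shape, float(penalty))
```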
56.
Oracle bone character images fall into two categories: rubbing images and hand-copied images. Rubbing character images are original rubbings taken from carriers such as tortoise shells and animal bones, whereas hand-copied character images are high-resolution images written by experts. Rubbing samples are difficult to obtain, while hand-copied samples are comparatively easy to collect. To improve the recognition of rubbing oracle bone characters, this paper proposes a recognition method based on cross-modal deep metric learning: hand-copied and rubbing characters are modeled in a shared feature space and classified by nearest-neighbor search, enabling cross-modal recognition of rubbing characters. Experimental results show that, on the rubbing character recognition task, the proposed cross-modal learning method clearly outperforms the single-modal approach and can also incrementally recognize rubbing characters from new categories.
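A minimal sketch of the shared-space nearest-neighbor classification described above follows, with random tensors standing in for embeddings produced by encoders trained (for example, with a metric-learning loss) to map hand-copied and rubbing characters into one feature space; the paper's actual training procedure is not reproduced here.

```python
import torch
import torch.nn.functional as F

def nearest_neighbor_classify(query_emb, support_embs, support_labels):
    """Classify a rubbing-image embedding by its nearest hand-copied embedding.

    New categories can be recognized incrementally by simply appending their
    hand-copied embeddings and labels to the support set.
    """
    q = F.normalize(query_emb, dim=-1)
    s = F.normalize(support_embs, dim=-1)
    sims = s @ q                          # cosine similarity to every support sample
    return support_labels[sims.argmax()]

torch.manual_seed(0)
support = torch.randn(50, 128)            # embeddings of hand-copied characters
labels = torch.arange(50)                 # one class per support sample (toy setup)
query = torch.randn(128)                  # embedding of a rubbing character
print(int(nearest_neighbor_classify(query, support, labels)))
```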
57.
Jointly analyzing and processing cross-modal data has long been a difficult and active topic in modern artificial intelligence, its main challenge being the semantic and heterogeneity gaps between modalities. In recent years, with the rapid development of deep learning theory and techniques, deep-learning-based algorithms have made great progress in image and text processing, giving rise to the task of visual question answering (VQA). A VQA system takes visual information and a question in text form as input and produces the corresponding answer; its core lies in jointly understanding and processing visual and textual information. This paper therefore presents a detailed survey of VQA methods, grouping existing approaches by their underlying principle into three categories: data fusion, cross-modal attention, and knowledge reasoning. It comprehensively summarizes and analyzes the latest progress of VQA methods, introduces commonly used VQA datasets, and offers an outlook on future research directions.
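To make the cross-modal attention category concrete, here is a minimal, generic sketch in which question tokens attend over image-region features and the pooled result is classified into an answer vocabulary; it illustrates the family of methods, not any specific model from the survey, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class QuestionToImageAttention(nn.Module):
    """Minimal cross-modal attention: question tokens attend over image regions."""
    def __init__(self, dim=512, n_heads=8, n_answers=1000):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.classifier = nn.Linear(dim, n_answers)  # answers treated as classes

    def forward(self, question_tokens, image_regions):
        # question_tokens: (B, Lq, D), image_regions: (B, Lv, D)
        attended, _ = self.attn(query=question_tokens,
                                key=image_regions,
                                value=image_regions)
        fused = attended.mean(dim=1)                  # pool over question tokens
        return self.classifier(fused)                 # answer logits

model = QuestionToImageAttention()
logits = model(torch.randn(2, 12, 512), torch.randn(2, 36, 512))
print(logits.shape)  # torch.Size([2, 1000])
```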
58.
Due to the storage and retrieval efficiency of hashing, as well as the highly discriminative features extracted by deep neural networks, deep cross-modal hashing retrieval has attracted increasing attention in recent years. However, most existing deep cross-modal hashing methods simply use a single label to measure the semantic relevance across modalities and neglect the potential contribution of multiple category labels. To improve the accuracy of cross-modal hashing retrieval by fully exploiting the semantic relevance encoded in the multiple labels of the training data, in this paper we propose a multi-label semantics preserving based deep cross-modal hashing (MLSPH) method. MLSPH first uses the multiple labels of instances to calculate the semantic similarity of the original data. A memory bank mechanism is then introduced to preserve the multi-label semantic similarity constraints and to enforce the distinctiveness of the learned hash representations over the whole training batch. Extensive experiments on several benchmark datasets show that the proposed MLSPH surpasses prominent baselines and reaches state-of-the-art performance in cross-modal hashing retrieval. Code is available at: https://github.com/SWU-CS-MediaLab/MLSPH.
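Below is a rough sketch, under stated assumptions, of the two ideas named in the abstract that are easiest to isolate: a multi-label similarity target computed from multi-hot label vectors, and a loss that aligns cross-modal code inner products with it. The memory bank and other components of MLSPH are omitted, and the exact formulation may differ from the released code.

```python
import torch
import torch.nn.functional as F

def multilabel_similarity(labels: torch.Tensor) -> torch.Tensor:
    """Pairwise semantic similarity from multi-hot label vectors (cosine)."""
    l = F.normalize(labels.float(), dim=1)
    return l @ l.t()                                  # (N, N), values in [0, 1]

def similarity_preserving_loss(img_codes, txt_codes, labels, n_bits=64):
    """Push cross-modal code agreement toward the multi-label similarity."""
    target = multilabel_similarity(labels)
    # inner product of relaxed codes, rescaled to roughly [-1, 1]
    pred = img_codes @ txt_codes.t() / n_bits
    return F.mse_loss(pred, 2 * target - 1)

labels = torch.randint(0, 2, (8, 24))                 # 8 samples, 24 categories
img_codes = torch.tanh(torch.randn(8, 64))            # relaxed image hash codes
txt_codes = torch.tanh(torch.randn(8, 64))            # relaxed text hash codes
print(float(similarity_preserving_loss(img_codes, txt_codes, labels)))
```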
59.
In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either by means of a blocked design or by predictive peripheral cues, and exogenously by means of nonpredictive peripheral cues. The results of 3 experiments demonstrate a reduction in the magnitude of the cross-modal dynamic capture effect on cued trials compared with uncued trials. The introduction of neutral cues (Experiments 4 and 5) confirmed the existence of both attentional costs and benefits. This attention-related reduction in cross-modal dynamic capture was larger when a peripheral cue was used compared with when attention was oriented in a purely endogenous manner. In sum, the results suggest that spatial attention reduces illusory binding by facilitating the segregation of unimodal signals, thereby modulating audiovisual interactions in information processing. Thus, the effect of spatial attention occurs prior to or at the same time as cross-modal interactions involving motion information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
60.
Building on DMN, this paper presents a cross-modal target instance segmentation method that combines natural-language expressions with visual information to segment the described object from an image. A CBAM attention module is introduced into the DPN92 visual feature extraction network to emphasize informative spatial and channel features, and the BN layers are replaced with a regularization that combines BN and FRN, reducing the dependence of the feature extractor on batch size and channel count and improving generalization. Experiments on three common datasets, ReferIt, GRef, and UNC, show that with the CBAM attention and the joint normalization, the improved model raises mIoU by 1.85 and 0.52 percentage points on ReferIt and GRef, respectively, and by 1.98, 2.22, and 2.75 percentage points on the three UNC validation sets, indicating that the improved model predicts more accurately than existing models.
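For reference, a minimal CBAM block (channel attention followed by spatial attention, as in Woo et al.) is sketched below; the reduction ratio and kernel size are common defaults assumed here, and the integration into DPN92 and the BN/FRN change are not shown.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                     # shared MLP for channel attention
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                             # x: (B, C, H, W)
        b, c, _, _ = x.shape
        # channel attention: pool over space, reweight channels
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention: pool over channels, reweight locations
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(pooled))

feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```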