Full text (subscription)   391 articles
Open access   168 articles
Domestic open access   140 articles
Electrical engineering   10 articles
General   51 articles
Chemical industry   15 articles
Metalworking   6 articles
Machinery & instrumentation   18 articles
Building science   5 articles
Energy & power   3 articles
Light industry   7 articles
Water conservancy   3 articles
Petroleum & natural gas   1 article
Armaments industry   1 article
Radio & electronics   78 articles
General industrial technology   119 articles
Metallurgy   20 articles
Nuclear technology   3 articles
Automation   359 articles
2024: 26 articles
2023: 69 articles
2022: 50 articles
2021: 55 articles
2020: 44 articles
2019: 38 articles
2018: 41 articles
2017: 23 articles
2016: 34 articles
2015: 28 articles
2014: 25 articles
2013: 35 articles
2012: 32 articles
2011: 26 articles
2010: 11 articles
2009: 22 articles
2008: 23 articles
2007: 20 articles
2006: 11 articles
2005: 17 articles
2004: 13 articles
2003: 7 articles
2002: 6 articles
2001: 12 articles
2000: 6 articles
1999: 4 articles
1998: 8 articles
1997: 3 articles
1996: 2 articles
1995: 2 articles
1994: 1 article
1993: 1 article
1992: 2 articles
1991: 1 article
1986: 1 article
Sort order: 699 results in total (search time: 46 ms)
1.
Fullerenes are candidates for theranostic applications because of their high photodynamic activity and intrinsic multimodal imaging contrast. However, fullerenes suffer from low solubility in aqueous media, poor biocompatibility, cell toxicity, and a tendency to aggregate. C70@lysozyme is introduced herein as a novel bioconjugate that is harmless to a cellular environment, yet is also photoactive and has excellent optical and optoacoustic contrast for tracking cellular uptake and intracellular localization. The formation, water-solubility, photoactivity, and unperturbed structure of C70@lysozyme are confirmed using UV-visible and 2D 1H, 15N NMR spectroscopy. The excellent imaging contrast of C70@lysozyme in optoacoustic and third harmonic generation microscopy is exploited to monitor its uptake in HeLa cells and lysosomal trafficking. Last, the photoactivity of C70@lysozyme and its ability to initiate cell death by means of singlet oxygen (1O2) production upon exposure to low levels of white light irradiation is demonstrated. This study introduces C70@lysozyme and other fullerene-protein conjugates as potential candidates for theranostic applications.
2.
Recent research indicates that by 4.5 months, infants use shape and size information as the basis for individuating objects but that it is not until 11.5 months that they use color information for this purpose. The present experiments investigated the extent to which infants' sensitivity to color information could be increased through select experiences. Five experiments were conducted with 10.5- and 9.5-month-olds. The results revealed that multimodal (visual and tactile), but not unimodal (visual only), exploration of the objects prior to the individuation task increased 10.5-month-olds' sensitivity to color differences. These results suggest that multisensory experience with objects facilitates infants' use of color information when individuating objects. In contrast, 9.5-month-olds did not benefit from the multisensory procedure; possible explanations for this finding are explored. Together, these results reveal how an everyday experience--combined visual and tactile exploration of objects--can promote infants' use of color information as the basis for individuating objects. More broadly, these results shed light on the nature of infants' object representations and the cognitive mechanisms that support infants' changing sensitivity to color differences. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
3.
Building on conventional PID control theory and artificial intelligence, and addressing the nonlinearity and modelling difficulty of hydraulic elevator speed-control systems, this paper adopts a multi-mode intelligent PID control algorithm that retains the advantages of the conventional PID controller. An application example demonstrates the effectiveness of this control method.
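The multi-mode idea in this abstract can be illustrated with a small sketch: a PID controller that switches between two gain sets depending on the magnitude of the speed error. All gains, the switching threshold, and the toy plant below are illustrative assumptions, not the paper's tuned parameters.

```python
class MultiModePID:
    """PID controller that switches gain sets ("modes") on error magnitude."""

    def __init__(self, modes, threshold):
        # modes: {"coarse": (kp, ki, kd), "fine": (kp, ki, kd)}
        self.modes = modes
        self.threshold = threshold
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        # Large errors use aggressive "coarse" gains; small errors use
        # gentler "fine" gains to limit overshoot near the setpoint.
        key = "coarse" if abs(error) > self.threshold else "fine"
        kp, ki, kd = self.modes[key]
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative

# Drive a crude first-order lag plant toward a 1.0 m/s speed setpoint.
pid = MultiModePID({"coarse": (2.0, 0.5, 0.05), "fine": (0.8, 0.2, 0.02)},
                   threshold=0.3)
speed = 0.0
for _ in range(200):
    u = pid.update(1.0, speed, dt=0.01)
    speed += (u - speed) * 0.01  # toy plant dynamics
```

A real hydraulic-elevator controller would add more modes (e.g. start, cruise, landing) and intelligent switching logic; this sketch shows only the gain-scheduling mechanism.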
4.
Multimodal data have the potential to explore emerging learning practices that extend human cognitive capacities. A critical issue in many multimodal learning analytics (MLA) systems and studies is their current focus on supporting researchers in modelling learner behaviours, rather than on directly supporting learners. Moreover, many MLA systems are designed and deployed without learners' involvement. We argue that in order to create MLA interfaces that directly support learning, we need to gain an expanded understanding of how multimodal data can support learners' authentic needs. We present a qualitative study in which 40 computer science students were tracked in an authentic learning activity using wearable and static sensors. Our findings outline learners' curated representations about multimodal data and the non-technical challenges in using these data in their learning practice. The paper discusses 10 dimensions that can serve as guidelines for researchers and designers to create effective and ethically aware student-facing MLA innovations.
5.
CCA-based identity recognition by fusing ear and profile-face features (cited 2 times: 0 self-citations, 2 by others)
Given the particular physiological relationship between the ear and the face, and from the perspective of non-intrusive recognition, this paper proposes acquiring only profile-face images and using canonical correlation analysis (CCA) to extract correlated features of the ear and the profile face, fusing the two at the feature level. Experimental results show that, compared with recognition based on ear or profile-face features alone, the method improves the recognition rate.
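The feature-level fusion described above can be sketched as follows: compute CCA projections for two feature views and concatenate the projected features. The synthetic data, dimensions, and ridge constant below are assumptions for illustration; the paper works with real ear and profile-face features.

```python
import numpy as np

def cca_fuse(X, Y, k, reg=1e-6):
    """Project two feature views onto their top-k canonical directions and
    concatenate the results (feature-level fusion). reg is a small ridge
    term for numerical stability (an assumption, not from the paper)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    # Canonical correlations are the singular values of the whitened
    # cross-covariance Lx^{-1} Cxy Ly^{-T}.
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, S, Vt = np.linalg.svd(M)
    Wx = np.linalg.solve(Lx.T, U[:, :k])
    Wy = np.linalg.solve(Ly.T, Vt.T[:, :k])
    return np.hstack([Xc @ Wx, Yc @ Wy]), S[:k]

# Synthetic "ear" and "profile face" features sharing a linear structure.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                  # ear view
Y = X @ rng.normal(size=(4, 3)) + 0.01 * rng.normal(size=(500, 3))
fused, corrs = cca_fuse(X, Y, k=2)
```

Because the two views are nearly linearly related, the leading canonical correlation comes out close to 1, and the fused representation carries the shared structure both views agree on.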
6.
Most existing vision-language pre-training methods focus on understanding tasks and use BERT-like loss functions (masked language modeling and image-text matching) during pre-training. Despite their good performance on understanding-oriented downstream tasks such as visual question answering, image-text retrieval, and visual entailment, these methods lack the ability to generate text. To tackle this problem, this study proposes Unified multimodal pre-training for Vision-Language understanding and generation (UniVL). The proposed UniVL is capable of handling both understanding tasks and generation tasks. It expands existing pre-training paradigms by using random masks and causal masks simultaneously, where causal masks are triangular masks that hide future tokens, giving the pre-trained model autoregressive generation ability. Moreover, several vision-language understanding tasks are reformulated as text-generation tasks, and a prompt-based method is employed for fine-tuning on different downstream tasks. The experiments show that there is a trade-off between understanding tasks and generation tasks when the same model is used, and that a feasible way to improve both is to use more data. The proposed UniVL framework attains performance comparable to recent vision-language pre-training methods on both understanding and generation tasks. Moreover, the prompt-based generation method is more effective and even outperforms discriminative methods in few-shot scenarios.
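The two masking regimes the abstract combines — random masks for BERT-style masked-language-model training and triangular causal masks for autoregressive generation — can be sketched as follows. The sequence length, mask ratio, and seed are illustrative assumptions.

```python
import random

def causal_mask(n):
    """Lower-triangular attention mask: position i may attend only to
    positions j <= i, which is what gives the pre-trained model its
    autoregressive generation ability."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def random_mask_positions(n, ratio, seed=0):
    """Pick token positions to hide for masked-language-model training."""
    rng = random.Random(seed)
    k = max(1, int(n * ratio))
    return sorted(rng.sample(range(n), k))
```

Per the abstract, pre-training applies both regimes simultaneously, so a single model learns bidirectional understanding and left-to-right generation.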
7.
Today's society has entered a digital and information age. As people's modes of communication shift from text toward images, the scope of discourse analysis has extended beyond written language to social semiotic resources such as pictures, sound, and colour. Using visual grammar theory within the social-semiotic framework, this article analyses Haibao, the mascot of Expo 2010 Shanghai, as a multimodal discourse, showing that social semiotic resources other than written language are also sources of meaning.
8.
Research on an integrated optimization model of transport mode and route selection in multimodal transport (cited 2 times: 0 self-citations, 2 by others)
Transport-mode and route selection are key factors determining the time and cost of multimodal transport, and they directly affect the interests of carriers and customers. Based on the relationship between mode selection and route optimization, a leader-follower hybrid intelligent heuristic is used to build an integrated model of mode selection and route optimization, and a bilevel particle-swarm/ant-colony optimization algorithm is given to solve the integrated problem over transport networks with multiple nodes, modes, and routes. Experimental results show that this scheme outperforms the ant-colony algorithm and the genetic algorithm.
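The coupled decision this model integrates — choosing a mode for every leg of a route while trading off cost, travel time, and transfer penalties — can be shown with a brute-force sketch on a toy two-leg network. The paper solves the full problem with a bilevel PSO/ant-colony heuristic; the exhaustive search and all numbers below are invented purely to illustrate the search space.

```python
import itertools

# Toy chain A -> B -> C; each leg offers several modes with (cost, time).
legs = [
    {"rail": (40, 10), "road": (60, 6), "water": (25, 20)},   # A -> B
    {"rail": (35, 8),  "road": (55, 5)},                      # B -> C
]
transfer_cost = 15   # charged whenever the mode changes between legs
weight_time = 2.0    # assumed cost per unit of travel time

def plan_cost(modes):
    """Total cost of a mode assignment: leg costs + transfer penalties
    + time converted to cost at weight_time."""
    cost = time = 0
    for leg, m in zip(legs, modes):
        c, t = leg[m]
        cost += c
        time += t
    cost += transfer_cost * sum(a != b for a, b in zip(modes, modes[1:]))
    return cost + weight_time * time

# Enumerate every mode combination; the heuristic in the paper searches
# this same space without full enumeration.
best = min(itertools.product(*[leg.keys() for leg in legs]), key=plan_cost)
```

On this toy instance the cheap-but-slow water leg loses to rail once time is priced in, which is exactly the mode/route coupling the integrated model captures.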
9.
Objective: Accurate glioma grading is a principal aid in devising individualized treatment plans, but most existing research predicts grade from a pre-delineated tumour region of interest, which cannot meet the real-time requirements of clinical computer-aided diagnosis. This paper therefore proposes an adaptive multi-modal fusion network (AMMFNet) that achieves accurate end-to-end grade prediction directly from the raw acquired images, without tumour delineation. Methods: AMMFNet uses four isomorphic but semantically distinct network branches to extract multi-scale image features from the different modalities; an adaptive multi-modal feature-fusion module and a dimension-reduction module fuse the features; and a cross-entropy classification loss combined with a feature-embedding loss improves classification accuracy. The model was trained and tested on the public MICCAI (Medical Image Computing and Computer Assisted Intervention Society) 2018 dataset and compared with state-of-the-art deep learning models and recent glioma-classification models, using accuracy and area under the ROC curve (AUC) as quantitative metrics. Results: Without tumour delineation, the model achieved an AUC of 0.965 for grade prediction; when the tumour region was used, the AUC reached 0.997 with an accuracy of 0.982, 1.2% higher than the best current glioma classifier, a multi-task convolutional neural network. Conclusion: By combining multi-modal features at multiple semantic levels, the proposed network accurately predicts glioma grade without prior tumour delineation.
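The adaptive fusion step in the Methods can be sketched minimally: per-modality feature vectors are combined as a softmax-weighted sum, with the gate scores standing in for what would be learned, data-dependent weights in the real network. The logits and feature values below are assumptions, not the paper's architecture details.

```python
import math

def adaptive_fusion(features, gate_logits):
    """Softmax-weighted sum of per-modality feature vectors.
    features: list of equal-length vectors, one per imaging modality.
    gate_logits: one score per modality; in the actual network these
    would come from a learned gating branch."""
    m = max(gate_logits)
    exps = [math.exp(g - m) for g in gate_logits]
    weights = [e / sum(exps) for e in exps]
    fused = [sum(w * f[i] for w, f in zip(weights, features))
             for i in range(len(features[0]))]
    return fused, weights

# Two toy modality feature vectors (e.g. T1 and FLAIR) with equal gates.
fused, weights = adaptive_fusion([[2.0, 4.0], [0.0, 0.0]], [0.0, 0.0])
```

With equal gate logits the fusion reduces to a plain average; unequal logits let the network emphasise whichever modality is most informative for a given case.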
10.
章荪, 尹春勇. 《计算机应用》, 2021, 41(6): 1631-1639
To address the problems of single-modality feature representation and cross-modal feature fusion in time-series multimodal sentiment analysis, a sentiment analysis model based on multi-task learning and the multi-head attention mechanism is proposed. First, a convolutional neural network (CNN), a bidirectional gated recurrent unit network (BiGRU), and multi-head self-attention (MHSA) are used to represent the features of each time-series modality; then multi-head attention performs bidirectional cross-modal information fusion; finally, following the multi-task learning idea, auxiliary sentiment-polarity classification and sentiment-intensity regression tasks are added to improve the overall performance of the main sentiment-score regression task. Experimental results show that, compared with the multimodal factorization model, the proposed model improves binary classification accuracy by 7.8 and 3.1 percentage points on the CMU-MOSEI and CMU-MOSI multimodal datasets, respectively. The model suits sentiment analysis in multimodal settings and can support decision-making in applications such as product recommendation, stock-market prediction, and public-opinion monitoring.
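The cross-modal fusion stage described above can be sketched with multi-head attention in which one modality queries another (say, text attending to audio). The learned Q/K/V projection matrices of the real model are replaced by identity projections here for brevity; dimensions and data are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query_feats, kv_feats, num_heads):
    """One modality's features attend to another's, per attention head.
    query_feats: (n_q, d); kv_feats: (n_k, d); d divisible by num_heads.
    Identity Q/K/V projections are an assumption; the model learns them."""
    n_q, d = query_feats.shape
    dh = d // num_heads
    # Split the feature dimension across heads: (heads, seq, dh).
    Q = query_feats.reshape(n_q, num_heads, dh).transpose(1, 0, 2)
    K = kv_feats.reshape(-1, num_heads, dh).transpose(1, 0, 2)
    V = K
    weights = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(dh))
    out = weights @ V                      # (heads, n_q, dh)
    return out.transpose(1, 0, 2).reshape(n_q, d)

rng = np.random.default_rng(0)
text = rng.normal(size=(3, 8))    # 3 text time steps, 8-dim features
audio = rng.normal(size=(5, 8))   # 5 audio time steps, 8-dim features
fused = cross_modal_attention(text, audio, num_heads=2)
```

Running the same function in the opposite direction (audio querying text) gives the bidirectional fusion the model describes.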
Copyright©北京勤云科技发展有限公司  京ICP备09084417号