758 query results found (search time: 15 ms)
1.
Fullerenes are candidates for theranostic applications because of their high photodynamic activity and intrinsic multimodal imaging contrast. However, fullerenes suffer from low solubility in aqueous media, poor biocompatibility, cell toxicity, and a tendency to aggregate. C70@lysozyme is introduced herein as a novel bioconjugate that is harmless to a cellular environment, yet is also photoactive and has excellent optical and optoacoustic contrast for tracking cellular uptake and intracellular localization. The formation, water solubility, photoactivity, and unperturbed structure of C70@lysozyme are confirmed using UV-visible and 2D ¹H-¹⁵N NMR spectroscopy. The excellent imaging contrast of C70@lysozyme in optoacoustic and third-harmonic-generation microscopy is exploited to monitor its uptake in HeLa cells and its lysosomal trafficking. Finally, the photoactivity of C70@lysozyme and its ability to initiate cell death by means of singlet oxygen (¹O₂) production upon exposure to low levels of white-light irradiation are demonstrated. This study introduces C70@lysozyme and other fullerene-protein conjugates as potential candidates for theranostic applications.
2.
Recent research indicates that by 4.5 months, infants use shape and size information as the basis for individuating objects, but that it is not until 11.5 months that they use color information for this purpose. The present experiments investigated the extent to which infants' sensitivity to color information could be increased through select experiences. Five experiments were conducted with 10.5- and 9.5-month-olds. The results revealed that multimodal (visual and tactile), but not unimodal (visual only), exploration of the objects prior to the individuation task increased 10.5-month-olds' sensitivity to color differences. These results suggest that multisensory experience with objects facilitates infants' use of color information when individuating objects. In contrast, 9.5-month-olds did not benefit from the multisensory procedure; possible explanations for this finding are explored. Together, these results reveal how an everyday experience, the combined visual and tactile exploration of objects, can promote infants' use of color information as the basis for individuating objects. More broadly, these results shed light on the nature of infants' object representations and the cognitive mechanisms that support infants' changing sensitivity to color differences.
3.
Multimodal data have the potential to explore emerging learning practices that extend human cognitive capacities. A critical issue running through many multimodal learning analytics (MLA) systems and studies is their current focus on supporting researchers in modelling learner behaviours, rather than on directly supporting learners. Moreover, many MLA systems are designed and deployed without learners' involvement. We argue that in order to create MLA interfaces that directly support learning, we need to gain an expanded understanding of how multimodal data can support learners' authentic needs. We present a qualitative study in which 40 computer science students were tracked in an authentic learning activity using wearable and static sensors. Our findings outline learners' curated representations of multimodal data and the non-technical challenges in using these data in their learning practice. The paper discusses 10 dimensions that can serve as guidelines for researchers and designers to create effective and ethically aware student-facing MLA innovations.
4.
CCA-Based Identity Recognition by Feature-Level Fusion of Ear and Profile Face   Total citations: 2; self-citations: 0; citations by others: 2
Given the particular physiological relationship between the ear and the face, and aiming at non-intrusive recognition, this paper proposes capturing only profile face images and applying the idea of canonical correlation analysis (CCA) to extract correlated features of the ear and the profile face, fusing the two at the feature level. Experimental results show that, compared with recognition based on ear or profile face features alone, this method improves the recognition rate.
5.
Most existing vision-language pre-training methods focus on understanding tasks and use BERT-like loss functions (masked language modeling and image-text matching) during pre-training. Despite their good performance on downstream understanding tasks, such as visual question answering, image-text retrieval, and visual entailment, these methods cannot perform generation. To tackle this problem, this study proposes Unified multimodal pre-training for Vision-Language understanding and generation (UniVL). The proposed UniVL is capable of handling both understanding tasks and generation tasks. It expands existing pre-training paradigms by using random masks and causal masks simultaneously, where causal masks are triangular masks that hide future tokens, so that the pre-trained model gains autoregressive generation ability. Moreover, several vision-language understanding tasks are reformulated as text generation tasks, and a prompt-based method is employed for fine-tuning on different downstream tasks. The experiments show that there is a trade-off between understanding tasks and generation tasks when the same model is used, and that a feasible way to improve both is to use more data. The proposed UniVL framework attains performance comparable to recent vision-language pre-training methods on both understanding and generation tasks. Moreover, the prompt-based generation method is more effective and even outperforms discriminative methods in few-shot scenarios.
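A minimal sketch (not the authors' code) of the two mask types the abstract contrasts: a BERT-style random mask, which keeps attention bidirectional, and a triangular causal mask, which hides future tokens and is what gives a pre-trained model autoregressive generation ability. Here 1 means "may attend" and 0 means "masked":

```python
def random_mask(seq_len, masked_positions):
    # BERT-style masking: chosen token positions are hidden from every
    # query, but attention remains bidirectional (all rows identical).
    return [[0 if j in masked_positions else 1 for j in range(seq_len)]
            for _ in range(seq_len)]

def causal_mask(seq_len):
    # Triangular mask: position i may attend only to positions j <= i,
    # so each token is predicted from its past alone.
    return [[1 if j <= i else 0 for j in range(seq_len)]
            for i in range(seq_len)]

print(causal_mask(3))  # [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
```

Using both mask types during pre-training, as UniVL does, lets one set of weights serve understanding tasks (bidirectional) and generation tasks (causal).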
6.
Today's society has entered a digital and information age. As people's mode of communication gradually shifts from text toward images, the scope of discourse analysis has extended beyond language to social semiotic resources such as images, sound, and color. Drawing on visual grammar theory within the social semiotics framework, this article analyzes Haibao, the mascot of the 2010 Shanghai World Expo, as a multimodal discourse, showing that social semiotic resources other than language are also sources of meaning.
7.
An Integrated Optimization Model for Transport Mode and Route Selection in Multimodal Transport   Total citations: 2; self-citations: 0; citations by others: 2
Transport mode and route selection are the key problems affecting the time and cost of multimodal transport, and they directly affect the interests of carriers and customers. Based on the relationship between mode selection and route optimization, a leader-follower hybrid intelligent heuristic approach is adopted to build an integrated model for transport mode selection and route optimization, and a bilevel particle swarm-ant colony optimization algorithm is given to solve it, addressing the integrated optimization problem for multi-node, multi-mode, multi-path transport networks. Experimental results show that this scheme outperforms the ant colony algorithm and the genetic algorithm.
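The abstract's bilevel PSO-ACO algorithm is not reproduced here; the sketch below only illustrates the underlying decision problem it solves, by exhaustively enumerating mode/route combinations on a toy network. All node names, modes, costs, and times are invented for illustration:

```python
import itertools

# Toy multimodal network: (origin, dest) -> {mode: (cost, time)}.
# All values are invented placeholders.
arcs = {
    ("A", "B"): {"rail": (40, 5), "road": (60, 3)},
    ("B", "C"): {"rail": (30, 4), "water": (20, 8)},
    ("A", "C"): {"road": (95, 6)},
}
paths = [[("A", "B"), ("B", "C")], [("A", "C")]]  # candidate routes A -> C

def best_plan(weight_cost=1.0, weight_time=5.0):
    # Jointly choose a route AND a mode per leg, minimizing a weighted
    # cost/time objective -- the integrated problem the model formalizes.
    best = None
    for path in paths:
        for modes in itertools.product(*(arcs[leg] for leg in path)):
            cost = sum(arcs[leg][m][0] for leg, m in zip(path, modes))
            time = sum(arcs[leg][m][1] for leg, m in zip(path, modes))
            obj = weight_cost * cost + weight_time * time
            if best is None or obj < best[0]:
                best = (obj, path, modes)
    return best

print(best_plan())  # rail on both legs wins under these weights
```

On real networks the combination count explodes, which is why the paper resorts to a bilevel metaheuristic (PSO for the mode-selection leader, ACO for the route-finding follower) instead of enumeration.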
8.
Objective: Accurate glioma grading is a primary means of supporting personalized treatment planning, but most existing studies focus on grade prediction from the tumor region, which requires delineating a region of interest in advance and cannot meet the real-time requirements of clinical computer-aided diagnosis. This paper therefore proposes an adaptive multi-modal fusion network (AMMFNet) that performs accurate end-to-end glioma grade prediction from raw acquired images without tumor-region delineation. Methods: AMMFNet uses four isomorphic but distinct network branches to extract multi-scale image features from different modalities; an adaptive multi-modal feature fusion module and a dimensionality-reduction module perform feature fusion; and a cross-entropy classification loss is combined with a feature-embedding loss to improve classification accuracy. To validate the model, it was trained and tested on the MICCAI (Medical Image Computing and Computer Assisted Intervention Society) 2018 public dataset, compared with state-of-the-art deep learning models and recent glioma classification models, and evaluated quantitatively using accuracy and area under the curve (AUC). Results: Without tumor-region delineation, the model achieved an AUC of 0.965 for glioma grading; with the tumor region, its AUC reached 0.997 with an accuracy of 0.982, 1.2% higher than the best current glioma classification model, a multi-task convolutional neural network. Conclusion: By combining multi-modal, multi-semantic-level features, the proposed adaptive multi-modal feature fusion network accurately predicts glioma grade without requiring tumor-region delineation.
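A schematic of the loss combination the abstract describes: a cross-entropy classification term plus a weighted feature-embedding term. The paper's exact embedding loss is not specified here; the squared distance to a class center below is an assumed placeholder, and `lam` is an invented weighting parameter:

```python
import math

def cross_entropy(probs, label):
    # Standard classification loss on predicted class probabilities.
    return -math.log(probs[label])

def embedding_loss(feature, class_center):
    # Placeholder embedding term: squared distance of the fused feature
    # to its class center (AMMFNet's exact form may differ).
    return sum((f - c) ** 2 for f, c in zip(feature, class_center))

def total_loss(probs, label, feature, center, lam=0.1):
    # Weighted sum of the classification and embedding terms,
    # as the abstract combines them.
    return cross_entropy(probs, label) + lam * embedding_loss(feature, center)

print(total_loss([0.7, 0.3], 0, [1.0, 2.0], [0.0, 2.0]))
```

The embedding term pulls same-grade samples together in feature space, which typically sharpens the decision boundary that the cross-entropy term alone would learn.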
9.
The growth of social networks provides large amounts of multimodal data for sentiment analysis research. Combining multimodal content for sentiment classification can exploit the correlations between modalities, avoiding the incomplete picture of overall sentiment that a single modality gives. Since simple shared-representation learning cannot fully mine the complementary features across modalities, a Multimodal Bidirectional Attention Hybrid (MBAH) model is proposed. On top of the image and text features extracted by deep models, a bidirectional attention mechanism introduces each modality's information into the other: the low-level features of one modality and the semantic features of the other are combined through attention computation to learn cross-modal correlations. The high-level features of the two modalities are then concatenated into a cross-modal shared representation and fed to a multilayer perceptron for classification. In addition, MBAH applies late fusion to combine the image-only and text-only self-attention models, searching for the optimal decision weights to form the final decision. Experimental results show that MBAH clearly improves sentiment classification over the other methods.
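A minimal sketch (assumed, not the MBAH implementation) of the bidirectional attention step: text features attend over image features and vice versa, and the two attended views can then be concatenated into a shared representation. Learned projection matrices and scaling are omitted for brevity:

```python
import math

def attend(queries, keys, values):
    # Dot-product attention: each query vector is re-expressed as a
    # softmax-weighted mixture of the other modality's value vectors.
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[d] for w, v in zip(weights, values))
                    for d in range(len(values[0]))])
    return out

def bidirectional_fuse(text_feats, image_feats):
    # Bidirectional step: text attends to image AND image attends to text,
    # producing two cross-modal views to concatenate downstream.
    t2i = attend(text_feats, image_feats, image_feats)
    i2t = attend(image_feats, text_feats, text_feats)
    return t2i, i2t
```

Running attention in both directions is what distinguishes this from one-way cross-attention: each modality's representation is conditioned on the other before the late-fusion decision.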
10.
To improve the performance of the particle swarm optimization algorithm, an optimal network of particle age structure carrying stagnation information is designed, and the information in this network is used to adaptively change three key parameters of the algorithm. On this basis, an adaptive particle swarm optimization method with stagnation information is proposed, and its specific optimization steps are given. Four classical low- and high-dimensional benchmark functions are used to validate the method, with comparison against the gravitational search algorithm and the traditional particle swarm optimization algorithm without stagnation information. The comparison shows that for low-dimensional multimodal functions, the search efficiency of the proposed method is twice that of the other methods in the literature. When the function dimension is higher than 2, its search efficiency is about the same as that of the other methods, but it is better at reaching the global optimum and local optima, and its solution precision is higher.
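A bare-bones sketch of the general idea, not the paper's method: the age-structure network is not reproduced, and the stagnation rule below (raise the inertia weight `w` when the global best stalls for several iterations) is an invented stand-in for the paper's adaptation of its three key parameters:

```python
import random

def adaptive_pso(f, dim=2, n=20, iters=200, seed=0):
    # Minimal PSO with a crude stagnation signal: if the global best has
    # not improved for several iterations, increase the inertia weight w
    # to push particles back into exploration.
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    w, c1, c2, stall = 0.4, 1.5, 1.5, 0
    for _ in range(iters):
        improved = False
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest, improved = pbest[i][:], True
        stall = 0 if improved else stall + 1
        w = 0.9 if stall > 5 else 0.4  # adapt a key parameter on stagnation

    return gbest

sphere = lambda x: sum(xi * xi for xi in x)
print(sphere(adaptive_pso(sphere)))
```

On a smooth unimodal function like the sphere, the adaptation rarely triggers; its value shows on multimodal landscapes, where detecting stagnation and re-widening the search helps the swarm escape local optima.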