51.
In this paper we focus on two complementary approaches to significantly decrease the pre-training time of a deep belief network (DBN). First, we propose an adaptive step size technique to enhance the convergence of the contrastive divergence (CD) algorithm, thereby reducing the number of epochs required to train the restricted Boltzmann machines (RBMs) that support the DBN infrastructure. Second, we present a highly scalable graphics processing unit (GPU) parallel implementation of the CD-k algorithm, which notably boosts the training speed. Additionally, extensive experiments are conducted on the MNIST and HHreco databases. The results suggest that the maximum useful depth of a DBN is related to the number and quality of the training samples. Moreover, it was found that the lower-level layer plays a fundamental role in building successful DBN models. Furthermore, the results contradict the preconceived idea that all the layers should be pre-trained. Finally, it is shown that by incorporating multiple back-propagation (MBP) layers, the DBN's generalization capability is remarkably improved.
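The abstract does not spell out the adaptive step size rule; below is a minimal sketch of CD-1 training for a binary RBM with a hypothetical sign-agreement adaptation of the learning rate (NumPy; the names, constants and the adaptation heuristic are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, rng):
    """One CD-1 gradient estimate for a binary RBM (biases omitted for brevity)."""
    h0_prob = sigmoid(v0 @ W)                      # positive phase
    h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0
    v1_prob = sigmoid(h0 @ W.T)                    # reconstruction
    h1_prob = sigmoid(v1_prob @ W)                 # negative phase
    return (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]

def train_rbm(data, n_hidden=64, epochs=10, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((data.shape[1], n_hidden))
    prev_grad = np.zeros_like(W)
    for _ in range(epochs):
        for batch in np.array_split(data, max(1, len(data) // 128)):
            grad = cd1_step(W, batch, rng)
            # Hypothetical adaptive step size: grow the rate where the update
            # keeps its sign across iterations, shrink it where the sign flips.
            agree = np.sign(grad) == np.sign(prev_grad)
            step = lr * np.where(agree, 1.2, 0.5)
            W += step * grad
            prev_grad = grad
    return W
```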
52.
Relaxation training is an application of affective computing with important implications for health and wellness. After detecting the user's affective state through physiological sensors, a relaxation training application can provide the user with explicit feedback about his/her detected affective state. This process (biofeedback) can enable an individual to learn over time how to change his/her physiological activity for the purposes of improving health and performance. In this paper, we provide three contributions to the field of affective computing for health and wellness. First, we propose a novel application for relaxation training that combines ideas from affective computing and games. The game detects the user's level of stress and uses it to influence the affective state and the behavior of a 3D virtual character as a form of embodied feedback. Second, we compare two algorithms for stress detection which follow two different approaches from the affective computing literature: a more practical and less costly approach that uses a single physiological sensor (skin conductance), and a potentially more accurate approach that uses four sensors (skin conductance, heart rate, muscle activity of the corrugator supercilii and the zygomaticus major). Third, as the central motivation of our research, we aim to improve the traditional methodology employed for comparisons in affective computing studies. To do so, we add to the study a placebo condition in which the user's stress level, unbeknownst to him/her, is determined pseudo-randomly instead of taking his/her physiological sensor readings into account. The obtained results show that only the feedback presented by the single-sensor algorithm was perceived as significantly more accurate than the placebo. Had the placebo condition not been included in the study, the effectiveness of the two algorithms would instead have appeared similar. This outcome highlights the importance of using more thorough methodologies in future affective computing studies.
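As a rough illustration of the single-sensor (skin conductance) detector and the placebo methodology described above, the sketch below maps skin-conductance level to a 0-1 stress level and, in the placebo condition, replaces it with a pseudo-random value; the window size and normalisation are purely illustrative assumptions, not the study's detectors:

```python
import random

def stress_level_from_scl(scl_samples, baseline):
    """Single-sensor sketch: map skin-conductance level (SCL, microsiemens)
    to a 0-1 stress level relative to a resting baseline."""
    window = scl_samples[-10:]                      # assumed 10-sample window
    recent = sum(window) / len(window)
    level = (recent - baseline) / max(baseline, 1e-6)
    return min(max(level, 0.0), 1.0)

def feedback_level(scl_samples, baseline, condition, rng=random):
    """In the placebo condition the feedback shown to the user is pseudo-random
    and ignores the physiological signal; otherwise it reflects the detector."""
    if condition == "placebo":
        return rng.random()
    return stress_level_from_scl(scl_samples, baseline)
```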
53.
In this article, an improved differential evolution algorithm (IDE) based on two different colonies is proposed and applied to the synthesis of time-modulated conformal arrays. The whole population of IDE is divided into two parts: one part searches the solution space globally, while the other searches the neighborhood of the solution provided by the first. Benchmark functions are used to validate IDE. Furthermore, IDE is applied to synthesize sum-difference patterns with a 1 × 16-element time-modulated circular array and low-sidelobe-level (SLL) patterns with an 8 × 12-element time-modulated cone array. After optimization, the sideband level (SBL) of the circular array at the first sideband frequency is −1.00 dB. The SLL and the SBL at the first sideband frequency of the cone array are lower than −30.00 and −20.00 dB, respectively. Experimental results verify the superior performance of IDE. Moreover, to accelerate the computation, a graphics processing unit parallel computing technique is introduced into the pattern synthesis, and acceleration ratios of more than 23 times are achieved. © 2014 Wiley Periodicals, Inc. Int J RF and Microwave CAE 24:697–705, 2014.
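The two-colony idea described above can be sketched as follows: one half of the population performs standard global DE search while the other half samples the neighbourhood of the current best solution. The control parameters, neighbourhood radius and selection scheme below are assumptions for illustration, not the published IDE settings:

```python
import numpy as np

def ide_minimize(f, bounds, pop_size=40, iters=200, F=0.5, CR=0.9, radius=0.1, seed=0):
    """Two-colony differential evolution: colony A explores globally,
    colony B perturbs the best-so-far solution within a small neighbourhood."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    half = pop_size // 2
    for _ in range(iters):
        best = pop[np.argmin(fit)]
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            if i < half:                        # colony A: DE/rand/1 global search
                mutant = a + F * (b - c)
            else:                               # colony B: search around the current best
                mutant = best + radius * (hi - lo) * rng.standard_normal(dim)
            mutant = np.clip(mutant, lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True     # keep at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()
```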
54.
Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduced operating costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advances in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future.
55.
Cloud computing introduces several technological changes that have created new ways for cloud providers to deliver their services to cloud consumers, particularly with respect to security risk assessment. Adapting current risk assessment tools to cloud computing is therefore a very difficult task, because several of its characteristics challenge the effectiveness of existing risk assessment approaches. Consequently, there is a need for a risk assessment approach adapted to cloud computing. With such an approach, cloud consumers can be assured of the effectiveness of data security, and cloud providers can win the trust of their cloud consumers. This paper presents the formalization of a risk assessment method for conventional systems as a fundamental step towards the development of a flexible risk assessment approach oriented to cloud consumers.
56.
To address the problem of product review information overload in e-commerce, affective computing theory is applied: by mining the product features mentioned in reviews and the corresponding positive or negative sentiment towards them, consumers are given sentiment analysis results at the granularity of individual product features, helping them quickly extract useful information from the mass of product reviews. The system first collects the review set of a specified product and mines its product features, then combines a sentiment corpus with lexical similarity computation and uses dependency relations to locate feature-polarity word pairs as well as degree adverbs and negation words. On this basis, taking into account the strength of degree adverbs and the semantic differences caused by the different word orders in which degree adverbs and negation words co-occur, a method for computing the degree of sentiment orientation of product reviews is proposed. Finally, the system is implemented and the effectiveness of the algorithm is verified. Experimental results show that the system performs well in practice.
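A tiny sketch of the scoring idea summarised above: a feature's polarity word contributes a base score that is amplified by a degree adverb and flipped or dampened by negation, with the relative order of negation word and degree adverb changing the result. The mini-lexicons and weights are illustrative assumptions, not the paper's sentiment corpus:

```python
# Hypothetical mini-lexicons; a real system would use a full sentiment corpus.
POLARITY = {"好": 1.0, "差": -1.0}          # sentiment words: good / bad
DEGREE   = {"非常": 2.0, "有点": 0.5}        # degree adverbs: very / slightly
NEGATION = {"不", "没有"}                    # negation words

def score_pair(modifiers, polarity_word):
    """Score one feature-polarity pair given the modifier words preceding it.

    Word order matters: "不 非常 好" (not very good) only weakens the praise,
    while "非常 不 好" (very not good) strengthens the criticism.
    """
    score = POLARITY[polarity_word]
    for i, w in enumerate(modifiers):
        if w in DEGREE:
            score *= DEGREE[w]
        elif w in NEGATION:
            # Negation before a degree adverb dampens; negation after it flips.
            if any(x in DEGREE for x in modifiers[i + 1:]):
                score *= -0.5
            else:
                score *= -1.0
    return score

print(score_pair(["不", "非常"], "好"))   # "not very good" -> -1.0 (mild negative)
print(score_pair(["非常", "不"], "好"))   # "very not good" -> -2.0 (strong negative)
```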
57.
A survey of big data systems and analysis technologies (cited 15 times in total: 0 self-citations, 15 citations by others)
First, according to the different forms of processing, the characteristics of different forms of data, their typical application scenarios and the corresponding representative processing systems are introduced, and three major development trends of big data processing systems are summarized. Then, the big data analysis technologies and applications supported by such systems (including deep learning, knowledge computing, social computing and visualization) are briefly surveyed, summarizing the key role each technology plays in the process of analyzing and understanding big data. Finally, the data-complexity, computation-complexity and system-complexity challenges facing big data processing and analysis are laid out, and possible countermeasures for each are proposed.
58.
The rise of data applications such as social networks and the Semantic Web has given birth to many graph data processing products, including Neo4j and HyperGraphDB; however, these products were not designed with the higher requirements that graph applications place on data availability and scalability fully in mind. To address this, an underlying modeling and storage solution is proposed for a graph engine based on a distributed in-memory cloud. A distributed key-value engine is built on top of the memory cloud, and the graph data are then modeled, read and written on top of this key-value store. Experimental results on large-scale datasets show that the solution offers good random-access performance for graphs and can efficiently support graph data applications at massive scale.
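The abstract leaves the key-value layout unspecified; one common way to model a graph on a key-value store is to keep each vertex's properties and adjacency list under vertex-scoped keys, as in the rough sketch below (the kv client and its get/put interface are assumed stand-ins for the distributed in-memory engine):

```python
import json

class GraphOnKV:
    """Model a graph on top of a key-value store:
    'v:<id>'   -> JSON-encoded vertex properties
    'adj:<id>' -> JSON-encoded list of neighbour ids
    """
    def __init__(self, kv):
        self.kv = kv                          # any object with get(key) / put(key, value)

    def add_vertex(self, vid, props=None):
        self.kv.put(f"v:{vid}", json.dumps(props or {}))
        if self.kv.get(f"adj:{vid}") is None:
            self.kv.put(f"adj:{vid}", json.dumps([]))

    def add_edge(self, src, dst):
        adj = json.loads(self.kv.get(f"adj:{src}"))
        if dst not in adj:
            adj.append(dst)
            self.kv.put(f"adj:{src}", json.dumps(adj))

    def neighbours(self, vid):
        return json.loads(self.kv.get(f"adj:{vid}") or "[]")

# A trivial in-process stand-in for the distributed memory-cloud store.
class DictKV(dict):
    def put(self, k, v): self[k] = v
    def get(self, k, default=None): return dict.get(self, k, default)

g = GraphOnKV(DictKV())
g.add_vertex("a"); g.add_vertex("b"); g.add_edge("a", "b")
print(g.neighbours("a"))                      # ['b']
```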
59.
汤颖  刘晓哲  张宏鑫 《计算机科学》2014,41(12):238-244,259
Large-scale cloud rendering produces huge amounts of 3D graphics rendering data. To reduce the transmission and storage cost of the image sequences produced by cluster rendering, and exploiting the low-entropy nature of rendered image sequences, a lossless data compression scheme based on dictionary coding is proposed that lowers the local complexity of the data. The scheme uses a data rearrangement technique to greatly increase the local redundancy of the data, thereby improving lossless compression efficiency. To further address the time cost of compressing large-scale image sequences, a distributed image compression scheme on a cloud computing platform is proposed, using the Map/Reduce computation model available in existing cloud computing to implement the distributed encoding. Experimental results show that, for the large-scale low-entropy image sequences produced by rendering, the proposed scheme effectively improves the coding rate and reduces the encoding time.
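The paper's exact rearrangement is not given here; the sketch below illustrates the general idea with an assumed channel-major rearrangement before a dictionary-based (zlib/DEFLATE) pass, so that the coder sees longer runs of similar bytes in low-entropy sequences:

```python
import zlib
import numpy as np

def compress_sequence(frames, rearrange=True):
    """Losslessly compress a sequence of identically shaped uint8 frames.

    With rearrange=True the frames are transposed so that each colour channel
    of the whole sequence is stored contiguously, which tends to raise local
    redundancy for low-entropy rendered sequences and helps dictionary coding."""
    stack = np.stack(frames)                       # (n, h, w, c)
    if rearrange:
        stack = np.transpose(stack, (3, 0, 1, 2))  # (c, n, h, w): channel-major layout
    return zlib.compress(stack.tobytes(), level=9)

def decompress_sequence(blob, n, h, w, c, rearrange=True):
    data = np.frombuffer(zlib.decompress(blob), dtype=np.uint8)
    if rearrange:
        stack = np.transpose(data.reshape(c, n, h, w), (1, 2, 3, 0))  # back to (n, h, w, c)
    else:
        stack = data.reshape(n, h, w, c)
    return [frame.copy() for frame in stack]

# Tiny demo with two nearly identical "rendered" frames.
f0 = np.zeros((4, 4, 3), dtype=np.uint8)
f1 = f0.copy(); f1[0, 0, 0] = 255
blob = compress_sequence([f0, f1])
restored = decompress_sequence(blob, 2, 4, 4, 3)
assert np.array_equal(restored[1], f1)             # lossless round trip
```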
60.
陈风  田雨波  杨敏 《计算机科学》2014,41(9):263-268
When graphics processing units (GPUs) are used to accelerate the parallel computation of particle swarm optimization (PSO), the literature often highlights the speedup at the expense of a degraded CPU-side PSO implementation. To compare the performance of GPU-PSO and CPU-PSO scientifically, the "effective speedup" is proposed as a performance metric. The evaluation method given in this paper does not require the CPU and GPU sides to use the same number of particles: the GPU parallel algorithm is compared against the best CPU serial algorithm, with accelerated convergence to a target accuracy as the criterion, and numerical simulation experiments on several benchmark functions are carried out under the Compute Unified Device Architecture (CUDA). The results show that greatly increasing the number of particles on the GPU accelerates the convergence of PSO to the target accuracy, achieving an "effective speedup" of more than 10 times compared with CPU-PSO.
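Based on the description above, the "effective speedup" can be read as the ratio of wall-clock times to reach the same target accuracy, best serial CPU-PSO over GPU-PSO, with no requirement that the two use the same particle count; a small sketch of such a measurement (function names and the iteration interface are assumptions) is:

```python
import time

def time_to_accuracy(run_pso, target, max_iters=100000):
    """Run a PSO implementation until its best fitness reaches the target
    accuracy, returning the elapsed wall-clock time in seconds (or None)."""
    start = time.perf_counter()
    for best in run_pso(max_iters):      # run_pso yields the best fitness per iteration
        if best <= target:
            return time.perf_counter() - start
    return None                           # target accuracy never reached

def effective_speedup(run_cpu_pso, run_gpu_pso, target):
    """Effective speedup = time of the best serial CPU-PSO divided by the time
    of GPU-PSO, both measured as time to converge to the same target accuracy.
    The two runs may use different particle counts."""
    t_cpu = time_to_accuracy(run_cpu_pso, target)
    t_gpu = time_to_accuracy(run_gpu_pso, target)
    if t_cpu is None or t_gpu is None:
        raise RuntimeError("one of the runs did not reach the target accuracy")
    return t_cpu / t_gpu
```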