91.
Existing hidden-surface removal methods for 3D models suffer from high computational complexity and long run times when applied to large-scale 3D scene models. To address this, this paper proposes a fast visualization method for hidden-surface removal in large substation scenes based on an improved Z-buffer algorithm. First, to simplify the computation, the scene model data are consolidated and restructured. Second, the substation scene model is rasterized through a perspective projection transformation. Building on the efficient per-pixel computation of the Z-buffer algorithm, a fast model-screening method is then proposed to derive the occlusion relations among the sub-models of the substation scene. Finally, the resulting occlusion-relation list is combined with existing hidden-surface removal algorithms in experiments, and the results show that the proposed method substantially improves the computational performance of hidden-surface removal.
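
As a rough illustration of the per-pixel depth test that the screening step builds on, the following Python sketch assigns each pixel to the nearest sub-model fragment and records which sub-models occlude which; the fragment list, grid size, and occlusion bookkeeping are illustrative assumptions, not the paper's improved algorithm.

```python
# Minimal sketch of the per-pixel Z-buffer test underlying occlusion screening:
# every rasterized fragment carries (sub_model_id, x, y, depth), and the nearest
# fragment at each pixel wins. The projection and model restructuring steps of
# the paper are not reproduced.
from collections import defaultdict

def zbuffer_occlusion(fragments, width, height):
    """fragments: iterable of (sub_model_id, x, y, depth) after projection."""
    depth = [[float("inf")] * width for _ in range(height)]
    owner = [[None] * width for _ in range(height)]
    occludes = defaultdict(set)          # occluder -> set of occluded sub-models

    for model_id, x, y, z in fragments:
        if not (0 <= x < width and 0 <= y < height):
            continue
        if z < depth[y][x]:              # closer fragment wins the pixel
            if owner[y][x] is not None and owner[y][x] != model_id:
                occludes[model_id].add(owner[y][x])
            depth[y][x], owner[y][x] = z, model_id
        elif owner[y][x] is not None and owner[y][x] != model_id:
            occludes[owner[y][x]].add(model_id)

    return owner, occludes               # visible owner per pixel + occlusion pairs

# Example: sub-model "A" sits in front of "B" at pixel (1, 1)
frags = [("B", 1, 1, 5.0), ("A", 1, 1, 2.0)]
print(dict(zbuffer_occlusion(frags, 4, 4)[1]))   # {'A': {'B'}} -> A occludes B
```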
92.
罗峰 《计算机应用研究》2020,37(11):3417-3421
For identity authentication in wireless body area networks (WBANs), an improved two-hop authentication scheme is proposed to address the drawbacks of the original scheme, in which communicating sensor nodes are traceable and lack anonymity, while retaining the original scheme's computational efficiency. Two independent secret parameters are introduced: a secret key for the second-level node N and an authentication parameter for the hub node HN, with the secret key of node N kept independent. Together with the secret value of the original scheme, these form three secret values that ensure the confidentiality and freshness of the parameters. Security analysis and BAN logic show that the proposed scheme achieves untraceability and anonymity; its computation cost is close to that of the original scheme, its communication cost is lower, and its storage cost increases slightly. The proposed scheme is an effective improvement over the original.
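
The sketch below illustrates, in generic terms only, the mechanism that such schemes rely on: a keyed MAC over a fresh nonce and a changing pseudonym lets the hub verify the node without exposing a trackable identity. The key names, message layout, and pseudonym construction are assumptions and do not reproduce the proposed protocol.

```python
# Illustrative sketch only (not the paper's exact protocol): a second-level
# node N proves itself to the hub HN through an intermediate hop using a keyed
# MAC over a fresh nonce and a changing pseudonym, the general mechanism behind
# untraceability and freshness.
import hmac, hashlib, os

K_N = os.urandom(16)                 # secret key of second-level node N (shared with HN)

def node_request(real_id: bytes):
    nonce = os.urandom(8)            # freshness
    pseudonym = hashlib.sha256(K_N + nonce + real_id).digest()[:8]  # hides real_id
    tag = hmac.new(K_N, pseudonym + nonce, hashlib.sha256).digest()
    return pseudonym, nonce, tag     # forwarded unchanged by the intermediate node

def hub_verify(real_id: bytes, pseudonym: bytes, nonce: bytes, tag: bytes) -> bool:
    if pseudonym != hashlib.sha256(K_N + nonce + real_id).digest()[:8]:
        return False                 # pseudonym not bound to this node and nonce
    expected = hmac.new(K_N, pseudonym + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

msg = node_request(b"node-N")
print(hub_verify(b"node-N", *msg))   # True
```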
93.
Three-way decision making with probabilistic rough sets is an important theory for solving problems under uncertainty, and stream computing is a new form of dynamic in-memory computation; performing fast dynamic three-way decision computation in the stream computing setting is a challenging new topic. This study focuses on the two core computational steps of stream computing, dynamic increment and dynamic decrement, and proposes a fast dynamic learning method for the three-way decision regions of probabilistic rough sets under stream computing. First, the different change cases of dynamic increments and decrements for three-way decisions in stream computing are modeled. Then, for each case, the change reasoning of the three-way decision regions under data insertion and data deletion is discussed, and based on this theory a dynamic incremental/decremental three-way decision learning algorithm for stream computing is presented. The algorithm achieves the same decisions as the classical three-way decision algorithm with lower time complexity. Finally, experiments on eight UCI data sets show that the proposed algorithm clearly outperforms the classical probabilistic rough set three-way decision algorithm in time consumption and maintains stable decision efficiency under different thresholds. This study demonstrates that fast three-way decision computation under stream computing is feasible.
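
For reference, the static three-way assignment rule from probabilistic rough sets that the dynamic algorithm maintains can be sketched as follows; the incremental and decremental update logic itself is not reproduced.

```python
# Classical three-way decision rule from probabilistic rough sets: an object x
# with conditional probability p = Pr(X | [x]) goes to the positive, boundary,
# or negative region according to thresholds alpha >= beta.
def three_way_region(p: float, alpha: float, beta: float) -> str:
    assert 0.0 <= beta < alpha <= 1.0
    if p >= alpha:
        return "POS"      # accept
    if p <= beta:
        return "NEG"      # reject
    return "BND"          # defer the decision

# Example with (alpha, beta) = (0.7, 0.3)
for p in (0.9, 0.5, 0.1):
    print(p, three_way_region(p, 0.7, 0.3))   # POS, BND, NEG
```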
94.
When developing a complex, multi-authored code, daily testing on multiple platforms and under a variety of conditions is essential. It is therefore necessary to have a regression test suite that is easily administered and configured, as well as a way to easily view and interpret the test suite results. We describe the methodology for verification of FLASH, a highly capable multiphysics scientific application code with a wide user base. The methodology uses a combination of unit and regression tests and in-house testing software that is optimized for operation under limited resources. Although our practical implementations do not always comply with theoretical regression-testing research, our methodology provides a comprehensive verification of a large scientific code under resource constraints. Copyright © 2013 John Wiley & Sons, Ltd.
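
A toy sketch of the baseline-comparison idea behind such a regression suite is given below; the test commands, file layout, and pass/fail policy are assumptions and are much simpler than FLASH's in-house testing software.

```python
# Toy regression harness: each test produces output that is diffed against a
# stored baseline, and any mismatch is reported. File names, commands, and the
# exact-match policy are illustrative assumptions.
import subprocess, pathlib, sys

TESTS = {"sod_shock": ["echo", "density 1.000 0.125"]}   # hypothetical test commands

def run_regression(baseline_dir="baselines"):
    failures = []
    for name, cmd in TESTS.items():
        output = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
        baseline = pathlib.Path(baseline_dir, f"{name}.txt")
        if not baseline.exists() or baseline.read_text().strip() != output:
            failures.append(name)
    return failures

if __name__ == "__main__":
    failed = run_regression()
    print("FAILED:", failed if failed else "none")
    sys.exit(1 if failed else 0)
```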
95.
Cloud computing is an emerging technology in which information technology resources are virtualized to users in a set of computing resources on a pay-per-use basis. It is seen as an effective infrastructure for high-performance applications. Divisible load applications arise in many scientific and engineering domains. However, dividing an application and deploying it in a cloud computing environment faces challenges in obtaining optimal performance, owing to the overheads introduced by cloud virtualization and the supporting cloud middleware. We therefore provide the results of a series of extensive experiments on scheduling a divisible load application in a Cloud environment to decrease the overall application execution time, considering the cloud networking and computing capacities presented to the application's user. We experiment with real applications within the Amazon cloud computing environment. Our extensive experiments analyze the reasons for the discrepancies between a theoretical model and reality and propose adequate solutions. These discrepancies are due to three factors: the network behavior, the application behavior, and the cloud computing virtualization. Our results show that applying the algorithm yields a maximum ratio of 1.41 between the measured normalized makespan and the ideal makespan for applications in which the communication-to-computation ratio is large. They show that the algorithm is effective for such applications in a heterogeneous setting, reaching a ratio of 1.28 for large data sets. For applications following the ensemble clustering model, in which the computation-to-communication ratio is large and variable, we obtained a maximum ratio of 4.7 for large data sets and a ratio of 2.11 for small data sets. Applying the algorithm also results in a substantial speedup. These results are revealing for the types of applications considered in our experiments. The experiments also reveal the impact of the choice of platforms provided by Amazon on the performance of the applications under study. Considering the emergence of cloud computing for high-performance applications, the results in this paper can be widely adopted by cloud computing developers. Copyright © 2014 John Wiley & Sons, Ltd.
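
A minimal sketch of the underlying divisible-load split is shown below: under a linear cost model where each worker receives its share independently, shares are chosen so that all workers finish at the same time. The rates are illustrative assumptions; the Amazon-specific calibration discussed above is not reproduced.

```python
# Basic divisible-load split: with a linear cost model (communication then
# computation, each worker served independently), equalizing finish times makes
# shares inversely proportional to each worker's per-unit cost.
def divisible_load_split(total, comm_rate, comp_rate):
    """comm_rate[i], comp_rate[i]: seconds per unit load for worker i."""
    inv_cost = [1.0 / (cm + cp) for cm, cp in zip(comm_rate, comp_rate)]
    scale = total / sum(inv_cost)
    shares = [scale * w for w in inv_cost]
    makespan = max((cm + cp) * x for cm, cp, x in zip(comm_rate, comp_rate, shares))
    return shares, makespan

shares, makespan = divisible_load_split(1000, comm_rate=[0.1, 0.2], comp_rate=[0.4, 0.3])
print([round(s, 1) for s in shares], round(makespan, 1))   # [500.0, 500.0] 250.0
```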
96.
Today, almost everyone is connected to the Internet and uses different Cloud solutions to store, deliver and process data. Cloud computing assembles large networks of virtualized services such as hardware and software resources. The new era in which ICT has penetrated almost all domains (healthcare, aged care, social assistance, surveillance, education, etc.) creates the need for new multimedia content-driven applications. These applications generate huge amounts of data and require gathering, processing and then aggregation in a fault-tolerant, reliable and secure heterogeneous distributed system created by a mixture of Cloud systems (public/private), mobile device networks, desktop-based clusters, etc. In this context, dynamic resource provisioning for Big Data application scheduling has become a challenge in modern systems. We propose a resource-aware hybrid scheduling algorithm for different types of applications: batch jobs and workflows. The proposed algorithm performs hierarchical clustering of the available resources into groups in the allocation phase. Task execution is performed in two phases: in the first, tasks are assigned to groups of resources, and in the second, a classical scheduling algorithm is used within each group. The proposed algorithm is suitable for Heterogeneous Distributed Computing, especially for modern High-Performance Computing (HPC) systems in which applications are modeled with various requirements (both I/O- and computation-intensive), with an emphasis on data from multimedia applications. We evaluate its performance in a realistic CloudSim setting with respect to load balancing, cost savings, dependency assurance for workflows and computational efficiency, and investigate how these performance metrics are computed at runtime.
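
The two-phase structure can be sketched as follows: resources are first grouped (here by a trivial capacity threshold standing in for hierarchical clustering), tasks are routed to a group, and a classical earliest-finish policy schedules within the group. Names and parameters are assumptions, not the evaluated algorithm.

```python
# Two-phase scheduling sketch: group resources, assign each task to a group,
# then apply a classical greedy earliest-finish policy inside the group.
def group_resources(resources, boundary=4.0):
    """resources: {name: capacity}. Split into 'fast' and 'slow' groups."""
    groups = {"fast": {}, "slow": {}}
    for name, cap in resources.items():
        groups["fast" if cap >= boundary else "slow"][name] = cap
    return groups

def schedule(tasks, groups):
    """tasks: [(task_id, size, wants_fast)]. Returns (placement, finish times)."""
    finish = {r: 0.0 for g in groups.values() for r in g}
    placement = {}
    for task_id, size, wants_fast in tasks:
        group = groups["fast" if wants_fast else "slow"] or groups["slow"]
        # classical phase: pick the resource with the earliest finish time
        best = min(group, key=lambda r: finish[r] + size / group[r])
        finish[best] += size / group[best]
        placement[task_id] = best
    return placement, finish

groups = group_resources({"vm1": 8.0, "vm2": 2.0, "vm3": 4.0})
print(schedule([("t1", 16, True), ("t2", 4, False), ("t3", 8, True)], groups))
```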
97.
Simulation has become a commonly employed first step in evaluating novel approaches towards resource allocation and task scheduling on distributed architectures. However, existing simulators fall short in their modeling of the instability common to shared computational infrastructure, such as public clouds. In this work, we present DynamicCloudSim, which extends the popular simulation toolkit CloudSim with several factors of instability, including inhomogeneity and dynamic changes of performance at runtime as well as failures during task execution. As a validation of the introduced functionality, we simulate the impact of instability on scientific workflow scheduling by assessing and comparing the performance of four schedulers in the course of several experiments, both in simulation and on real cloud infrastructure. Results indicate that our model seems to adequately capture the most important aspects of cloud performance instability. The source code of DynamicCloudSim and the examined schedulers is available at https://code.google.com/p/dynamiccloudsim/.
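
The three instability factors can be illustrated outside of any simulator with the sketch below: an inhomogeneous per-VM baseline speed, runtime drift, and random task failures with retries. The distributions and parameters are assumptions and are not DynamicCloudSim's actual defaults.

```python
# Toy instability model: per-VM baseline speeds differ (inhomogeneity), speed
# drifts at runtime (dynamic change), and each attempt can fail (failures with
# retries). Distributions and parameters are illustrative assumptions.
import random

def run_task(task_size, baseline_speed, rng,
             jitter=0.2, failure_prob=0.05, max_retries=3):
    """Return simulated wall-clock time for one task, including retries."""
    elapsed = 0.0
    for _ in range(max_retries + 1):
        speed = baseline_speed * max(0.1, rng.gauss(1.0, jitter))  # runtime drift
        elapsed += task_size / speed
        if rng.random() >= failure_prob:        # task survived this attempt
            return elapsed
    return float("inf")                         # gave up after repeated failures

rng = random.Random(42)
vm_speeds = [rng.gauss(10.0, 2.5) for _ in range(4)]   # inhomogeneous VMs
times = [run_task(100.0, s, rng) for s in vm_speeds]
print([round(t, 1) for t in times])
```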
98.
Research on an identity authentication model for cloud computing (cited 2 times in total: 0 self-citations, 2 by others)
Cloud computing is a breakthrough innovation that inherits and integrates many technologies and has become a focus of current applications and research. Identity authentication and resource authorization between cloud users and cloud services, and between different systems within a cloud platform, are prerequisites for cloud security. After a brief introduction to the cloud computing information infrastructure, and in view of the characteristics and requirements of unified identity authentication in the cloud, the functional features of technical specifications such as SAML 2.0, OAuth 2.0, and OpenID 2.0 are analyzed, and an open-standard identity authentication model for cloud computing is proposed, providing a reference for the formation and management of logical security domains in the cloud.
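
As a generic illustration of the token-based pattern shared by such federated deployments, the sketch below has a central identity provider sign a short-lived assertion once, which any service in the federation can verify; it is not the proposed model and does not implement SAML, OAuth, or OpenID.

```python
# Generic federated-authentication sketch: an identity provider signs a
# short-lived assertion, and services verify it without re-authenticating the
# user. Key handling and claim names are illustrative assumptions.
import hmac, hashlib, json, time, base64

IDP_KEY = b"shared-federation-key"     # assumption: symmetric trust for the sketch

def issue_assertion(user, domain, ttl=300):
    claims = {"sub": user, "aud": domain, "exp": time.time() + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_assertion(token, domain):
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["aud"] == domain and claims["exp"] > time.time() else None

token = issue_assertion("alice", "storage-service")
print(verify_assertion(token, "storage-service"))   # claims dict
print(verify_assertion(token, "compute-service"))   # None (wrong audience)
```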
99.
Solving the shortest path problem with a cloud-based hybrid parallel genetic algorithm (cited 2 times in total: 0 self-citations, 2 by others)
To improve the efficiency of shortest-path computation, a fine-grained hybrid parallel genetic algorithm based on cloud computing is proposed for solving the shortest path problem. The method adopts Hadoop's MapReduce parallel programming model to improve coding efficiency, and combines a fine-grained parallel genetic algorithm with tabu search to increase the computation speed and the local search capability of the optimization, thereby improving the efficiency of shortest-path solving. Simulation results show that the method outperforms the classical genetic algorithm and the parallel genetic algorithm in computation speed and performance, making it an effective method for solving shortest-path problems.
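
Expressed as plain map/reduce-style functions rather than actual Hadoop jobs, the hybrid idea can be sketched as below: a map step scores candidate paths, a reduce step keeps the best, and a tabu move perturbs an intermediate node. The graph, encoding, and operators are illustrative assumptions.

```python
# Coarse sketch of the hybrid GA + tabu idea as map/reduce-style functions:
# "map" scores candidate paths, "reduce" keeps the best, and a tabu move swaps
# in a non-tabu detour node. Graph and operators are illustrative assumptions.
import random

GRAPH = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}

def path_cost(path):                      # fitness: total edge weight (inf if invalid)
    cost = 0
    for u, v in zip(path, path[1:]):
        if v not in GRAPH[u]:
            return float("inf")
        cost += GRAPH[u][v]
    return cost

def map_phase(population):                # -> (path, cost) records
    return [(tuple(p), path_cost(p)) for p in population]

def reduce_phase(records):                # keep the best-scoring path
    return min(records, key=lambda r: r[1])

def tabu_move(path, tabu, rng):           # swap in a random non-tabu detour node
    candidates = [n for n in GRAPH if n not in path and n not in tabu]
    if not candidates or len(path) < 3:
        return path
    i = rng.randrange(1, len(path) - 1)
    new = list(path); new[i] = rng.choice(candidates)
    return new

rng = random.Random(0)
population = [["A", "B", "D"], ["A", "C", "D"], ["A", "B", "C", "D"]]
print(reduce_phase(map_phase(population)))        # (('A', 'B', 'C', 'D'), 4)
print(tabu_move(["A", "B", "D"], set(), rng))     # local perturbation example
```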
100.
A Hadoop-based platform for detecting and analyzing anomalous cloud traffic (cited 6 times in total: 0 self-citations, 6 by others)
Hadoop, an open-source distributed cloud computing platform, has been widely adopted, but its cloud side is exposed to various threats and attacks. A Hadoop-based platform for detecting and analyzing anomalous cloud traffic was therefore developed. First, a Mapper periodically extracts selected traffic attributes from all files storing traffic information; a Reducer then extracts and stores the anomalous traffic. By storing, inspecting, and analyzing the traffic data, the platform can successfully detect threatening attacks and thus protect the cloud. Because the platform is built on open-source Hadoop, its cost is low; and because it is implemented in Java, it can be ported to all major operating systems, giving it broad applicability. Monitoring experiments on a local area network show that the platform successfully detects anomalous traffic and presents the results through a friendly user interface.
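
A Hadoop Streaming-style sketch of that Mapper/Reducer split is shown below: the mapper emits (source IP, bytes) pairs from raw flow records, and the reducer flags sources whose total volume exceeds a threshold. The record format and threshold are assumptions, not the platform's Java implementation.

```python
# Hadoop Streaming-style sketch: the mapper emits (src_ip, bytes) from flow
# records, and the reducer flags high-volume sources as anomalous. Record
# format and threshold are illustrative assumptions.
import sys
from itertools import groupby

THRESHOLD = 10_000_000          # bytes per window considered anomalous (assumed)

def mapper(lines):
    for line in lines:          # assumed record: "timestamp src_ip dst_ip bytes"
        fields = line.split()
        if len(fields) == 4:
            yield fields[1], int(fields[3])

def reducer(pairs):             # pairs must arrive sorted by key, as in Hadoop
    for src_ip, group in groupby(pairs, key=lambda kv: kv[0]):
        total = sum(b for _, b in group)
        if total > THRESHOLD:
            yield src_ip, total

if __name__ == "__main__":
    mapped = sorted(mapper(sys.stdin), key=lambda kv: kv[0])   # stand-in for the shuffle
    for ip, total in reducer(mapped):
        print(f"{ip}\t{total}\tANOMALOUS")
```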