1.
In big-data scenarios, remote cloud servers are typically deployed for data processing and value mining, but this processing model falls short for latency-sensitive services or those requiring frequent, dynamic interaction. As a complement to cloud computing, fog computing has attracted wide attention because it can effectively reduce task-processing latency, energy consumption, and bandwidth usage; computation offloading for fog computing has in turn become a research focus because it relieves the processing burden on nodes and improves user experience. Under the fog-computing model, to better satisfy the latency and energy requirements of computation-intensive tasks, this paper proposes an intelligent computation-offloading scheme with joint resource allocation and control for blockchain-enabled IoT scenarios. Specifically, an optimization problem is formulated that minimizes the total completion cost of all tasks under latency, energy, and resource constraints, where the total cost jointly accounts for latency, energy consumption, and mining cost and is minimized by jointly optimizing communication resources, computing resources, and offloading decisions. To complete task offloading, end devices act as miners and mine (rent) computing resources from fog nodes; the proposed blockchain-based incentive mechanism motivates both end devices and fog nodes to participate in offloading and secures the transaction process, and the designed reward-allocation rule guarantees fairness of the rewards received by devices that successfully mine resources. To solve the formulated optimization problem, a mixed-integer nonlinear program, an intelligent computation-offloading algorithm that jointly handles communication, computation, and control is proposed. Building on the deep deterministic policy gradient (DDPG) method, the algorithm uses a dual actor-critic neural-network structure with inverted-gradient updates, which makes training more stable and easier to converge, and probabilistically discretizes the continuous action outputs, making it better suited to mixed-integer nonlinear programs. Finally, simulation results show that the proposed scheme converges quickly and achieves the lowest total cost among the four schemes compared; for example, relative to the best-performing baseline, a deep Q-network-based offloading scheme, the total cost is reduced by 15.2% on average.
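A minimal sketch of the probabilistic discretization step mentioned above, assuming the actor network has already produced a continuous preference vector over candidate offloading decisions (names and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def discretize_action(continuous_out, rng=None):
    """Map a continuous actor output to a discrete offloading decision.

    continuous_out: 1-D array of raw actor outputs, one entry per candidate
    decision (e.g., local execution or one of several fog nodes). The outputs
    are turned into a probability distribution with a softmax and a discrete
    decision is sampled from it.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(continuous_out, dtype=float)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs)), probs

# Example: the actor currently prefers fog node 2 for this task.
decision, probs = discretize_action([0.1, 0.4, 1.3, -0.2])
print(decision, probs.round(3))
```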
2.
Cloud computing and its related technologies have developed at a remarkable pace. However, centralized cloud storage faces several challenges, such as latency, storage capacity, and packet drops in the network. Cloud storage attracts attention because of its huge data capacity and its role in keeping secret information secure. Most developments in cloud storage have been positive, aside from the need for better cost models and effectiveness, but data leakage remains a billion-dollar question for consumers. Traditional data-security techniques are usually based on cryptographic methods, but these approaches may not withstand an attack launched from inside the cloud server. We therefore propose a security model called multi-layer storage (MLS) based on elliptic curve cryptography (ECC). The proposed model focuses on cloud storage together with data protection and the removal of duplicates at the initial level. Following a divide-and-combine methodology, the data are divided into three parts: the first two portions are stored in the local system and in fog nodes, secured with an encoding and decoding technique, while the remaining encrypted portion is saved in the cloud. The viability of the model has been examined through security analysis and experimental evaluation, and it is a strong complement to existing cloud-storage methods.
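A rough sketch of the divide-and-combine idea: split a byte string into three parts, keep the first two for the local system and a fog node, and encrypt only the third part for the cloud. The abstract specifies ECC; for a self-contained example the cloud portion is encrypted with symmetric Fernet from the `cryptography` package as a stand-in, and the equal-thirds split is an arbitrary assumption:

```python
from cryptography.fernet import Fernet

def split_three_ways(data: bytes):
    """Divide data into local, fog, and cloud portions (equal thirds here)."""
    n = len(data)
    a, b = n // 3, 2 * n // 3
    return data[:a], data[a:b], data[b:]

key = Fernet.generate_key()        # in the paper this role is played by ECC keys
cipher = Fernet(key)

payload = b"sensor readings ..."
local_part, fog_part, cloud_part = split_three_ways(payload)
cloud_ciphertext = cipher.encrypt(cloud_part)   # only the cloud copy is encrypted here

# Recombine: decrypt the cloud portion and concatenate in order.
restored = local_part + fog_part + cipher.decrypt(cloud_ciphertext)
assert restored == payload
```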
3.
The advantages of a cloud computing service include cost savings, availability, scalability, flexibility, reduced time to market, and dynamic access to computing resources. Enterprises can improve the adoption rate of cloud computing services if they understand the critical factors. To find them, this study first reviewed the literature and, based on the Technology-Organization-Environment framework, established a three-layer hierarchical factor table for adopting a cloud computing service. A hybrid method combining two multi-criteria decision-making tools, the Fuzzy Analytic Network Process and the acceptable-advantage concept of VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), was then used to identify the critical factors objectively rather than by the authors' subjective judgment. The results determined five critical factors: data access security, information transmission security, senior management support, fallback cloud management, and employee acceptance. Finally, the paper presents the findings and implications of the study.
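A compact sketch of the VIKOR ranking step (the acceptable-advantage check and the fuzzy ANP weighting are omitted); the scores and weights below are made-up illustrations, not the study's data:

```python
import numpy as np

def vikor(scores, weights, v=0.5):
    """Rank alternatives with VIKOR. scores: (alternatives x criteria),
    higher is better; weights: criterion weights summing to 1."""
    best, worst = scores.max(axis=0), scores.min(axis=0)
    d = (best - scores) / (best - worst)      # normalized regret per criterion
    S = (weights * d).sum(axis=1)             # group utility
    R = (weights * d).max(axis=1)             # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q                                  # lower Q = better alternative

scores = np.array([[0.8, 0.6, 0.9],
                   [0.5, 0.9, 0.7],
                   [0.7, 0.7, 0.8]])
weights = np.array([0.5, 0.3, 0.2])
print(vikor(scores, weights).round(3))
```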
4.
Edge computing is evolving rapidly because it can effectively meet the data-handling requirements of consumers, providers, and workers, and demand for edge-computing-based products has been increasing tremendously. Despite its advantages, many limitations remain that keep it from meeting the needs of consumers worldwide, including constraints on computing and hardware, functionality and accessibility, and remote administration and connectivity. Security also lags because of the difficulty of establishing trust between the devices involved in encryption and decryption; data security depends heavily on encrypting and decrypting quickly enough to transfer the data. In addition, edge devices are considerably exposed to side-channel attacks, including power-analysis attacks capable of subverting the process, and their constrained space and capability is one of the most challenging issues. To overcome these problems, we propose a cryptographic lightweight encryption algorithm with dimensionality reduction for edge computing. t-distributed stochastic neighbor embedding (t-SNE) is an efficient dimensionality-reduction technique that greatly decreases the size of nonlinear data: the three-dimensional image data obtained from the connected devices are first dimensionally reduced and the lightweight encryption algorithm is then applied. The security backlog can thus be addressed effectively with this method.
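A minimal sketch of the pipeline order described above: reduce the data with scikit-learn's t-SNE, then apply a lightweight cipher. The XOR keystream below is only a placeholder for whichever lightweight algorithm is actually used, and the data shapes are invented for illustration:

```python
import numpy as np
from sklearn.manifold import TSNE

# Illustrative image-like data: 200 samples, 3*32*32 raw features each.
raw = np.random.rand(200, 3 * 32 * 32).astype(np.float32)

# Step 1: dimensionality reduction before encryption.
embedded = TSNE(n_components=2, perplexity=30, init="random").fit_transform(raw)

# Step 2: "lightweight" encryption stand-in: XOR the reduced bytes with a keystream.
key = np.random.default_rng(0).integers(0, 256, size=16, dtype=np.uint8)
plain = embedded.astype(np.float32).tobytes()
stream = np.resize(key, len(plain))
cipher = np.frombuffer(plain, dtype=np.uint8) ^ stream

# XOR with the same keystream decrypts.
recovered = np.frombuffer((cipher ^ stream).tobytes(), dtype=np.float32)
assert np.allclose(recovered, embedded.astype(np.float32).ravel())
```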
5.
Most user-authentication mechanisms of cloud systems depend on the credentials approach, in which a user submits his or her identity through a username and password. Unfortunately, this approach has many security problems because personal data can be stolen or compromised by hackers. This paper presents a cloud-based biometric authentication model (CBioAM) for improving and securing cloud services. The study describes the verification and identification processes of the proposed cloud-based biometric authentication system (CBioAS), in which users' biometric samples are saved on database servers and the authentication process is carried out without loss of user information. The performance of the proposed model is evaluated in terms of three main characteristics: accuracy, sensitivity, and specificity. The study introduces a novel algorithm called "Bio_Authen_as_a_Service" for implementing and evaluating the proposed model. The proposed system performs biometric authentication securely and preserves the privacy of user information. The experimental results were highly promising for securing cloud services with the proposed model, showing a performance average of 93.94%, an accuracy average of 96.15%, a sensitivity average of 87.69%, and a specificity average of 97.99%.
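For reference, the three reported metrics are defined on the confusion matrix of the authentication decisions; a short sketch with made-up counts:

```python
def auth_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (true accept rate), specificity (true reject rate)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # genuine users correctly accepted
    specificity = tn / (tn + fp)   # impostors correctly rejected
    return accuracy, sensitivity, specificity

# Hypothetical test run: 570 genuine accepts, 80 genuine rejects (false negatives),
# 980 impostor rejects, 20 impostor accepts (false positives).
print(auth_metrics(tp=570, tn=980, fp=20, fn=80))
```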
6.
Identity management is based on creating and managing user identities so that access to cloud resources can be granted according to user attributes. Cloud identity and access management (IAM) authorizes end users to perform specific actions on specified cloud resources, and these authorizations are grouped into roles rather than being granted directly to end users. Because data resides in many cloud locations and there is no centralized user authority to grant or deny cloud user requests, several security strategies and models are needed to overcome these issues. Another major concern in IAM services is granting users excessive, or insufficient, access relative to their previously granted authorizations. This paper provides a comprehensive review of security services and threats and, based on them, proposes advanced IAM frameworks that provide authentication mechanisms for public and private cloud platforms. A threat model is applied to validate the proposed authentication frameworks against different security threats. The proposed models proved highly effective at protecting cloud platforms from insider attacks, single sign-on failure, brute-force attacks, denial of service, user privacy threats, and data privacy threats.
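A toy sketch of the role-based grouping the abstract describes: authorizations are attached to roles, users are attached to roles, and a request is checked against the union of the user's role permissions. The role and action names are invented for illustration, not taken from any particular cloud provider:

```python
from typing import Dict, Set

ROLE_PERMISSIONS: Dict[str, Set[str]] = {
    "storage.viewer": {"storage.objects.get", "storage.objects.list"},
    "storage.admin":  {"storage.objects.get", "storage.objects.list",
                       "storage.objects.delete"},
}

USER_ROLES: Dict[str, Set[str]] = {
    "alice": {"storage.viewer"},
    "bob":   {"storage.admin"},
}

def is_authorized(user: str, action: str) -> bool:
    """Allow the action if any of the user's roles carries that permission."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_authorized("bob", "storage.objects.delete")
assert not is_authorized("alice", "storage.objects.delete")
```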
7.
For businesses to benefit from the many opportunities of cloud computing, they must first address a number of security challenges, such as the potential leakage of confidential data to unintended third parties. An inter-VM attack (where VM is virtual machine), also known as a cross-VM attack, is one threat through which cloud-hosted confidential data can be leaked to unintended third parties; it exploits vulnerabilities between co-resident guest VMs that share the same cloud infrastructure. To stop such attacks, this paper uses the principles of logical analysis to model a solution that physically separates VMs belonging to conflicting tenants according to their levels of conflict. The derived mathematical model is founded on scientific principles and implemented in four conflict-aware VM placement algorithms, which take account of a tenant's risk appetite and the cost implications. The model offers guidance for VM placement and is validated with a proof of concept. A cloud simulation tool was used to test and evaluate its effectiveness and efficiency. The findings show that the proposed model introduces a lag in the time taken to place VM instances and that the number and size of the VM instances affect placement performance. They further illustrate that a VM's conflict-tolerance level has a direct impact on the time it takes to place that VM.
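A simplified sketch of one conflict-aware placement rule: a VM is never placed on a host that already runs a VM of a conflicting tenant whose conflict level exceeds the requesting tenant's tolerance. This is a greedy first-fit illustration under assumed data structures, not one of the paper's four algorithms:

```python
def place_vm(vm, hosts, conflict_level, tolerance):
    """Place vm on the first host with capacity and no intolerable tenant conflict.

    vm: dict with 'tenant' and 'size'; hosts: list of dicts with 'capacity' and
    'vms'; conflict_level(a, b): 0..1 conflict score between tenants;
    tolerance: per-tenant acceptable conflict level.
    """
    for host in hosts:
        used = sum(v["size"] for v in host["vms"])
        if used + vm["size"] > host["capacity"]:
            continue
        clash = any(conflict_level(vm["tenant"], v["tenant"]) > tolerance[vm["tenant"]]
                    for v in host["vms"])
        if not clash:
            host["vms"].append(vm)
            return host
    return None  # no conflict-free host found; caller may provision a new one

# Example: tenants A and B are competitors (conflict 1.0), so they never co-reside.
hosts = [{"capacity": 4, "vms": []}, {"capacity": 4, "vms": []}]
conflict = lambda a, b: 1.0 if {a, b} == {"A", "B"} else 0.0
tol = {"A": 0.5, "B": 0.5}
place_vm({"tenant": "A", "size": 2}, hosts, conflict, tol)
place_vm({"tenant": "B", "size": 2}, hosts, conflict, tol)
print([[v["tenant"] for v in h["vms"]] for h in hosts])   # [['A'], ['B']]
```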
8.
When ultra-dense networks are combined with edge computing, the high density of base stations may cover the same user repeatedly, and the user's choice of base station for offloading affects system performance differently, which raises the offloading-target selection problem. At the same time, edge computing allows part of a task to be offloaded to an edge server, and choosing an appropriate offloading ratio can significantly reduce latency and energy consumption, which raises the offloading-ratio selection problem. This paper proposes an edge-computing offloading strategy for ultra-dense networks based on game theory and a heuristic algorithm. For the offloading-target selection problem, a preference index is defined from the distance between edge servers and users and from the servers' workloads; users select offloading targets through a game over this preference index and are then grouped, decomposing the original problem into several parallel subproblems. For the offloading-ratio selection problem, each user's offloading ratio is optimized with the glowworm swarm optimization algorithm to obtain an appropriate ratio. Compared with the all-local processing (ALP) strategy, the all-offloading strategy (AOS), and a particle swarm optimization (PSO)-based offloading strategy, experimental results show that ALP and AOS are limited in total energy consumption and average latency, while the proposed strategy reduces latency by 22% and energy consumption by 20% relative to the PSO-based strategy, effectively reducing system loss.
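A minimal sketch of the preference-index idea used for offloading-target selection, assuming preference falls as distance and current workload grow; the linear weighting below is an illustrative assumption, not the paper's formula:

```python
import numpy as np

def preference(distances, loads, alpha=0.5):
    """Preference of one user for each candidate edge server.

    distances, loads: arrays normalized to [0, 1]; smaller is better for both.
    alpha trades off distance against workload.
    """
    return 1.0 - (alpha * np.asarray(distances) + (1 - alpha) * np.asarray(loads))

# One user, three candidate base stations / edge servers.
dist = np.array([0.2, 0.6, 0.4])
load = np.array([0.7, 0.1, 0.3])
pref = preference(dist, load)
print(pref.round(2), "-> choose server", int(pref.argmax()))
```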
9.
Hyperspectral image classification algorithms usually process image pixels iteratively, point by point, and differ considerably in computational complexity and degree of parallelism. As the spatial, spectral, and radiometric resolutions of hyperspectral remote-sensing images keep improving, these algorithms cannot meet the need to process massive remote-sensing image data in real time. By analyzing the NPU's integrated storage-and-computation model and the implementation steps of remote-sensing image classification, a low-rank sparse subspace clustering (LRSSC) algorithm is designed for a low-power CPU+NPU heterogeneous computing architecture: data-intensive computation is shifted to the NPU, and the NPU's data-driven parallel computation and built-in AI acceleration are used to classify massive remote-sensing data with machine-learning algorithms in real time. Inspired by the big.LITTLE computing paradigm, the CPU+NPU heterogeneous architecture combines 8-bit and lower-precision NPUs to raise overall throughput while reducing energy consumption during graph-network inference. Experimental results show that, on the Pavia University remote-sensing dataset, the LRSSC algorithm on the CPU+NPU heterogeneous architecture runs 3 to 14 times faster than on a CPU architecture or a CPU+GPU heterogeneous architecture.
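A rough sketch of the sparse-subspace-clustering core that an LRSSC pipeline accelerates: each sample is expressed sparsely in terms of the others, the coefficients form an affinity matrix, and spectral clustering assigns labels. This is a plain-CPU illustration with scikit-learn on toy data, not the NPU-offloaded implementation described above:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_affinity(X, alpha=0.01):
    """Express each column of X as a sparse combination of the other columns."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(X, i, axis=1)
        coef = Lasso(alpha=alpha, max_iter=5000).fit(others, X[:, i]).coef_
        C[np.arange(n) != i, i] = coef
    return np.abs(C) + np.abs(C).T          # symmetric, nonnegative affinity

# Toy data: 60 "pixels" drawn from two low-dimensional subspaces.
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30)),
               rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))])
W = sparse_affinity(X)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)
```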
10.
Exascale computer systems are enormous, so the total number of faults and anomalies grows and diagnosis becomes harder; an accurate and efficient real-time maintenance fault-diagnosis system is therefore urgently needed to provide comprehensive real-time detection of anomaly and fault information, fault diagnosis, and fault prediction for the hardware system. Traditional fault-diagnosis systems suffer from low execution efficiency and high anomaly-detection false-alarm rates when diagnosing systems with tens of thousands of nodes, and their coverage of anomaly detection and fault diagnosis is insufficient. This work studies techniques for anomaly and fault detection, fault diagnosis, and fault prediction, analyzes their principles and applicability, and, guided by the practical engineering requirements of exascale high-performance computers, designs a maintenance fault-diagnosis system that meets their needs. A scalable edge-diagnosis architecture is designed around the structure of the maintenance system; fault detection, diagnosis, and prediction algorithms are obtained by fusing HPC system knowledge and expert knowledge with mathematical statistics and machine learning, and prediction models are built for dedicated scenarios. Experimental results show that the system scales well and completes fault diagnosis for a system of one hundred thousand nodes within 10 s. Compared with a traditional fault-diagnosis system, the false-alarm rate of one particular anomaly-detection metric drops from 3.3% to nearly zero, hardware fault-detection coverage rises from 90.2% to more than 96%, hardware fault-diagnosis coverage rises from 71% to about 94%, and faults in several important application scenarios are predicted fairly accurately.
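A small sketch of the statistical side of such detection: flag a node metric as anomalous when it drifts more than k standard deviations from the fleet baseline. The metric, values, and threshold are illustrative; the real system also fuses expert rules and learned models:

```python
import numpy as np

def zscore_anomalies(samples, k=3.0):
    """Return indices of nodes whose metric deviates more than k sigma
    from the fleet mean (a simple baseline detector, not the full system)."""
    x = np.asarray(samples, dtype=float)
    mu, sigma = x.mean(), x.std()
    if sigma == 0:
        return np.array([], dtype=int)
    return np.flatnonzero(np.abs(x - mu) > k * sigma)

# Hypothetical per-node temperature readings; node 3 is running hot.
temps = np.array([61.0, 62.5, 60.8, 95.0, 61.7, 62.1, 60.9, 61.3, 62.0, 61.5])
print(zscore_anomalies(temps, k=2.5))   # -> [3]
```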