Full-text access type
Paid full text | 537 |
Free | 59 |
Free (domestic) | 30 |
Subject classification
Electrical engineering | 2 |
General | 10 |
Metalworking | 1 |
Building science | 12 |
Hydraulic engineering | 2 |
Weapons industry | 1 |
Radio/electronics | 110 |
General industrial technology | 1 |
Automation technology | 487 |
Publication year
2022 | 7 |
2021 | 7 |
2020 | 6 |
2019 | 3 |
2018 | 4 |
2017 | 13 |
2016 | 21 |
2015 | 18 |
2014 | 29 |
2013 | 31 |
2012 | 36 |
2011 | 50 |
2010 | 25 |
2009 | 42 |
2008 | 37 |
2007 | 44 |
2006 | 43 |
2005 | 34 |
2004 | 30 |
2003 | 30 |
2002 | 27 |
2001 | 15 |
2000 | 12 |
1999 | 9 |
1998 | 10 |
1997 | 10 |
1996 | 6 |
1995 | 7 |
1994 | 5 |
1993 | 4 |
1992 | 1 |
1991 | 3 |
1990 | 2 |
1989 | 1 |
1988 | 1 |
1985 | 1 |
1981 | 2 |
Sort order: 626 results found, search time 15 ms
81.
Akira Matsumoto, Takayuki Nakagawa, Masatoshi Sato, Yasunori Kimura, Kenji Nishida, Atsuhiro Goto. New Generation Computing, 1991, 9(2): 149-169
The parallel inference machine (PIM) is now being developed at ICOT. It consists of a dozen or more clusters, each of which is a tightly coupled multiprocessor (comprising about eight processing elements) with shared global memory and a common bus. Kernel Language 1 (KL1), a parallel logic programming language based on Guarded Horn Clauses (GHC), is executed on each PIM cluster.

This paper describes the memory access characteristics of KL1 parallel execution and a locally parallel cache mechanism with a hardware lock. The most important issue in locally parallel cache design is how to reduce common-bus traffic. A write-back cache protocol with five cache states, specially optimized for KL1 execution on each PIM cluster, is described. We introduce new software-controlled memory access commands, named DW, ER, and RP. A hardware lock mechanism is attached to the cache of each processor. This lock mechanism enables efficient word-by-word locking, reducing common-bus traffic by exploiting the cache states.
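The key idea of the abstract — that a lock taken on a line already held exclusively needs no bus transaction — can be sketched as a toy model. The five-state split and the state names below are illustrative assumptions; the paper's actual protocol states and the DW/ER/RP command semantics are not reproduced here.

```python
from enum import Enum

class LineState(Enum):
    INVALID = "I"
    CLEAN_EXCLUSIVE = "CE"
    DIRTY_EXCLUSIVE = "DE"
    CLEAN_SHARED = "CS"
    DIRTY_SHARED = "DS"   # hypothetical fifth state

class CacheLine:
    def __init__(self):
        self.state = LineState.INVALID
        self.locked_words = set()

class ToyCache:
    """Counts bus transactions to show how exclusive cache states let
    word-by-word locks be taken without touching the common bus."""
    def __init__(self):
        self.lines = {}
        self.bus_traffic = 0

    def lock_word(self, line_addr, word):
        line = self.lines.setdefault(line_addr, CacheLine())
        if line.state in (LineState.CLEAN_EXCLUSIVE, LineState.DIRTY_EXCLUSIVE):
            line.locked_words.add(word)   # exclusive copy: no bus transaction
        else:
            self.bus_traffic += 1         # must gain exclusivity over the bus
            line.state = LineState.DIRTY_EXCLUSIVE
            line.locked_words.add(word)
```

In this sketch, only the first lock on a line pays a bus transaction; subsequent word locks on the same exclusively held line are free, which is the traffic reduction the abstract claims.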
82.
83.
Distributed high-speed caching uses replication strategies to improve parallel service efficiency for data-intensive services in cloud environments. This paper proposes a replication strategy based on tile access patterns, designed to optimize load balancing for large-scale user access in cloud-based WebGIS. First, the strategy exploits the bias and repeatability of tile access to cache the most popular tiles, achieving a high cache hit rate under a limited distributed cache. Second, based on the uneven distribution and temporally local changes of tile access, it generates hot-tile replicas with a seat-allocation scheme to allocate the distributed cache efficiently. Finally, it balances the load of tile access requests based on the spatial correlation and spatial locality of tile access patterns, achieving fast data extraction. Experimental results indicate that the proposed strategy can handle large volumes of WebGIS tile requests in a cloud environment.
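The "seat allocation" idea — giving hot tiles replica slots in proportion to their observed popularity — can be sketched with the standard largest-remainder method. The abstract does not specify which allocation scheme the paper uses, so this is an assumed, minimal illustration:

```python
def allocate_replicas(access_counts, total_replicas):
    """Largest-remainder seat allocation: each tile gets replica slots
    proportional to its share of observed accesses (an illustrative
    scheme, not necessarily the paper's exact one)."""
    total = sum(access_counts.values())
    quotas = {t: total_replicas * c / total for t, c in access_counts.items()}
    seats = {t: int(q) for t, q in quotas.items()}          # integer part first
    leftover = total_replicas - sum(seats.values())
    # hand out remaining slots by largest fractional remainder
    for t in sorted(quotas, key=lambda t: quotas[t] - seats[t],
                    reverse=True)[:leftover]:
        seats[t] += 1
    return seats
```

For example, with access counts {a: 50, b: 30, c: 20} and 7 replica slots, the quotas are 3.5, 2.1, and 1.4, so tile a receives the leftover slot and the allocation is {a: 4, b: 2, c: 1}.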
84.
An Improved Snooping Cache Protocol Algorithm (cited by 2: 0 self-citations, 2 external)
This paper examines several common solutions to the cache coherence problem in SMP multiprocessor environments, discusses and compares their respective characteristics, further analyzes their shortcomings on that basis, and proposes our scheme: an improved snooping cache protocol algorithm.
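The snooping protocols this abstract surveys are variants of the textbook MESI scheme, in which every cache observes ("snoops") bus transactions and downgrades or invalidates its copy accordingly. The transition table below is the standard MESI behavior for a *remote* cache observing bus events, not the paper's improved algorithm:

```python
# Minimal MESI snooping sketch. States: Modified, Exclusive, Shared, Invalid.
def on_bus_event(state, event):
    """Next state of a remote cache's copy after snooping a bus event.
    BusRd  = another core reads the line;
    BusRdX = another core requests exclusive (write) access."""
    if event == "BusRd":
        # M must be written back and downgraded; E and S become shared.
        return "S" if state in ("M", "E", "S") else "I"
    if event == "BusRdX":
        # Any valid copy is invalidated when another core writes.
        return "I"
    return state
```

Improvements such as the paper's typically aim to avoid unnecessary invalidations or bus transactions that this basic table incurs.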
85.
86.
Improved Cache Trace-Driven Attacks on AES and CLEFIA (cited by 1: 0 self-citations, 1 external)
By analyzing "cache miss" trace information and the misaligned distribution of S-boxes in the cache, this paper proposes an improved trace-driven attack on AES and CLEFIA. Most existing attacks assume that S-boxes are aligned in the cache; under that assumption, first-round trace-driven attacks on AES and CLEFIA cannot recover the first-round expanded key within a feasible search complexity. Our study shows that in most cases the S-boxes are in fact misaligned in the cache. By collecting "cache miss" traces during encryption, 200 samples analyzed over the first round of AES and 50 samples over its last round reduce the 128-bit AES master-key search space to 2^16 and to 1, respectively; 80 samples analyzed over the first round of CLEFIA reduce the search space of its 128-bit first-round expanded key to 2^16; and 220 samples analyzed over the first three rounds reduce the 128-bit CLEFIA master-key search space to 2^16, in under 1 s.
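The key-space reduction mechanism behind such attacks can be shown with a toy model (not the paper's attack): a first-round table lookup uses index p XOR k, and observing which cache line that lookup touches reveals the index's upper bits, eliminating all key bytes inconsistent with the observation. The line size and table geometry below are assumptions for illustration.

```python
LINE_SIZE = 16      # bytes per cache line (assumed)

def surviving_keys(plaintext_byte, observed_line):
    """Key bytes consistent with the lookup at index (p ^ k) having
    touched cache line `observed_line` of an aligned 256-byte table.
    Toy model only: real attacks must also handle misalignment, which
    is exactly what this paper's improvement addresses."""
    return [k for k in range(256)
            if (plaintext_byte ^ k) // LINE_SIZE == observed_line]
```

Each observation leaves only one line's worth of candidates per key byte (here 16 of 256), which is how a modest number of traces collapses the overall search space.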
87.
Jelle Feringa. Architectural Design, 2014, 84(3): 60-65
What do Odico Formwork Robotics, RoboFold, Machineous, ROB Technologies and GREYSHED have in common? They are all architectural robotics startups. Jelle Feringa, Chief Technology Officer at Odico, places the phenomenon of the architectural robotics entrepreneur in a historical and cultural context while highlighting the very practical role startups are poised to play in bridging the gap between academic research and industry, by providing the building industry with much-needed new software tools.
88.
The development of proxy caching is essential in the area of video-on-demand (VoD) to meet users' expectations. VoD requires high bandwidth and creates heavy traffic due to the nature of media. Many researchers have developed proxy caching models to reduce bandwidth consumption and traffic. Proxy caching keeps part of a media object to meet users' viewing expectations without delay and provides interactive playback. If caching continues indefinitely, the entire cache space will eventually be exhausted. Hence, the proxy server must apply cache replacement policies to evict existing objects and allocate cache space for incoming ones. Researchers have developed many cache replacement policies based on parameters such as recency, access frequency, cost of retrieval, and object size. In this paper, the Weighted-Rank Cache replacement Policy (WRCP) is proposed. This policy uses parameters such as access frequency, aging, and mean access-gap ratio, together with the size and retrieval cost of the object. The WRCP applies our previously developed proxy caching model, Hot-Point Proxy, at four levels of replacement, depending on the cache requirement. Simulation results show that the WRCP outperforms our earlier model, the Dual Cache Replacement Policy.
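A weighted-rank replacement policy of the kind this abstract describes can be sketched as a scoring function over the listed parameters. The composite formula below is an illustrative assumption; the abstract does not give WRCP's actual weights or ranking function.

```python
def pick_victim(objects):
    """objects: {name: {"freq", "age", "gap_ratio", "size", "cost"}}.
    Rank cached objects by a composite retention value and evict the
    lowest-valued one. Hypothetical formula: frequently accessed,
    costly-to-refetch objects are kept; large, stale objects with long
    access gaps are evicted first."""
    def value(o):
        return (o["freq"] * o["cost"]) / (o["size"] * o["age"] * o["gap_ratio"])
    return min(objects, key=lambda name: value(objects[name]))
```

With one hot, cheap-to-keep object and one cold, large, stale object, the cold object is chosen as the eviction victim.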
89.
When a Linux system is accessed with data blocks of different sizes, its read/write performance varies, and for a few applications with particular access block sizes it performs poorly. The performance of the file cache algorithm is one cause of this problem. Based on an analysis of how access block size affects file cache algorithm performance, this paper proposes an adaptive file cache strategy. The strategy accounts for the influence of the prefetch algorithm on the page replacement algorithm and improves the replacement algorithm's adaptability to changes in access block size, thereby improving Linux read/write performance. Read/write performance tests show that the strategy keeps Linux performance stable, and better, across accesses with different block sizes.
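The interaction the abstract points at — prefetching that helps small sequential requests can hurt replacement behavior for large ones — can be caricatured with a tiny adaptive window policy. This is purely illustrative and is not the paper's algorithm or the Linux kernel's actual readahead logic:

```python
def adapt_readahead(window, request_size, page_size=4096, max_window=32):
    """Toy adaptive policy: shrink the readahead window when requests are
    already large (extra prefetched pages would evict useful ones from the
    page cache), and grow it for small requests that benefit from
    prefetching. All thresholds are assumptions."""
    pages = max(1, request_size // page_size)
    if pages >= window:
        return max(1, window // 2)        # large blocks: prefetch less
    return min(max_window, window * 2)    # small blocks: prefetch more
```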
90.
The main purpose of caching in wireless mobile environments is to reduce consumption of wireless bandwidth and conserve battery power; however, the roaming and frequent disconnection of mobile wireless terminals raise a series of new problems for cache consistency. Targeting currently deployed GPRS networks, this paper proposes a framework that caches data at two levels — at the client (mobile terminal) and at a Validation Server (VS) added to the GPRS backbone — together with a strong cache consistency policy. The framework simplifies the maintenance of cache consistency in wireless mobile environments, effectively reduces wireless bandwidth usage and database server load, supports disconnections of arbitrary duration and roaming within a single public land mobile network (PLMN), and is highly practical.
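The validation-server role described in this abstract can be sketched as a version-tracking service: a terminal that reconnects after an arbitrarily long disconnection revalidates its whole cache in one round trip instead of refetching everything. The class and method names below are illustrative assumptions, not the paper's interface.

```python
class ValidationServer:
    """Sketch of a VS in the GPRS backbone: tracks a version number per
    data item so mobile terminals can cheaply revalidate their caches."""
    def __init__(self):
        self.versions = {}            # item -> current version number

    def update(self, item):
        """Called when the origin database changes an item."""
        self.versions[item] = self.versions.get(item, 0) + 1

    def validate(self, client_versions):
        """Given the client's cached {item: version} map, return the set
        of items that are stale and must be refetched."""
        return {item for item, v in client_versions.items()
                if self.versions.get(item, 0) != v}
```

Only stale items cross the wireless link, which is how the scheme reduces bandwidth usage while still providing strong consistency.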