42 search results in total.
1.
游小容  曹晟 《计算机科学》2015,42(10):76-80
Hadoop, a mature distributed cloud platform, provides reliable and efficient storage services and is commonly used to store large files, but its efficiency drops sharply when handling massive numbers of small files. This paper proposes a storage optimization scheme for the many small files found in large-scale educational resources on Hadoop: exploiting the correlations among small educational-resource files, small files are merged into large files to reduce the file count; an index mechanism is used to access the small files; and metadata caching together with prefetching of correlated small files improves read efficiency. Experiments show that these methods improve the efficiency with which the Hadoop file system stores and retrieves small files.
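To make the merge-plus-index idea concrete, here is a minimal Python sketch (not the authors' implementation): small files are concatenated into one large file with a name-to-offset index, and reads consult the index and prefetch correlated files into an in-memory cache. The local filesystem stands in for HDFS, and the `related` correlation map is a hypothetical input.

```python
import json
import os

def merge_small_files(small_paths, merged_path, index_path):
    """Concatenate many small files into one large file and build a
    name -> (offset, length) index, so a single large file (plus its
    index) replaces thousands of small ones."""
    index = {}
    offset = 0
    with open(merged_path, "wb") as out:
        for path in small_paths:
            with open(path, "rb") as f:
                data = f.read()
            index[os.path.basename(path)] = (offset, len(data))
            out.write(data)
            offset += len(data)
    with open(index_path, "w") as f:
        json.dump(index, f)
    return index

def read_small_file(merged_path, index, name, cache, related):
    """Serve one small file via the index; also prefetch files known to
    be accessed together with it (the `related` map) into an in-memory
    cache, mimicking the correlated-prefetch step."""
    def fetch(n):
        offset, length = index[n]
        with open(merged_path, "rb") as f:
            f.seek(offset)
            return f.read(length)
    if name not in cache:
        cache[name] = fetch(name)
    for neighbour in related.get(name, []):
        cache.setdefault(neighbour, fetch(neighbour))
    return cache[name]
```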
2.
In recent years, cluster computing has been widely investigated, and there is no doubt that it can provide a cost-effective computing infrastructure by aggregating computational power, communication, and storage resources. It is also considered a very attractive platform for low-cost supercomputing. Distributed shared memory (DSM) systems use the physical memory of each computing node in a private network to form a global virtual shared memory. Since this global shared memory is distributed among the computing nodes, accessing data located on remote nodes is unavoidable, and doing so incurs significant remote memory access latencies, which are a major source of overhead in DSM systems. A number of strategies have been devised to increase overall system performance and reduce this overhead. Prefetching is one such approach: it can hide latencies, although it also increases the workload on the home nodes. In this paper, we propose a scheme named the Agent Home Scheme. Its most noticeable feature, compared to other schemes, is that agent homes share the work of sending data, distributing it across the computing nodes. This balances the workload across nodes, reducing both the load on the home nodes and the waiting time. Experimental results show that the proposed method achieves about 20% higher performance than the original JIAJIA, about 18% higher than the History Prefetching Strategy (HPS), and about 10% higher than the Effective Prefetch Strategy (EPS).
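As a rough illustration of why spreading the home node's send workload helps, the following toy Python model (not the paper's JIAJIA-based implementation) counts how many send operations each serving node performs when all pages are homed on one node versus spread across agent homes; the node names and page-to-home mapping are invented for the example.

```python
import random

def serve_requests(requests, nodes):
    """Toy DSM model: each page is homed on exactly one node; count how
    many send operations each node performs for a request stream."""
    load = {n: 0 for n in nodes}
    for page in requests:
        load[nodes[page % len(nodes)]] += 1
    return load

random.seed(0)
requests = [random.randrange(64) for _ in range(10_000)]

# All pages homed on a single node: that node does all the sending.
print(serve_requests(requests, ["home"]))

# Pages spread across the home plus three agent homes, in the spirit
# of letting agents share the home node's data-sending workload.
print(serve_requests(requests, ["home", "agent1", "agent2", "agent3"]))
```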
3.
World Wide Web (WWW) services have grown to levels where significant delays are to be expected. Techniques such as prefetching can help users personalize their needs and reduce their waiting times. This paper first describes the architecture of prefetching, then classifies prefetching algorithms into three types (based on the branch model, based on the tree model, and others) and presents in depth the basic ideas of some existing prefetching algorithms. Next, several models for controlling prefetching are introduced. Finally, trends and open directions for prefetching algorithms are summarized.
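As one concrete instance of the "branch model" family the survey describes, here is a minimal first-order Markov prefetcher in Python; it is an illustrative sketch, not any specific algorithm from the survey.

```python
from collections import Counter, defaultdict

class MarkovPrefetcher:
    """First-order Markov predictor, a simple branch-model prefetcher:
    the next request is predicted from how often each page has
    followed the current one in past accesses."""
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.last = None

    def access(self, page):
        if self.last is not None:
            self.transitions[self.last][page] += 1
        self.last = page

    def predict(self, k=2):
        """Return up to k pages worth prefetching after the last access."""
        counts = self.transitions.get(self.last)
        return [p for p, _ in counts.most_common(k)] if counts else []

p = MarkovPrefetcher()
for page in ["index", "news", "index", "sports", "index", "news"]:
    p.access(page)
print(p.predict())   # ['index']: the page that has always followed 'news'
```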
4.
The file system and the components of the computer system associated with it (disks, drums, channels, mass storage, tapes and tape drives, controllers, I/O drivers, etc.) make up a very substantial fraction of most computer systems: substantial in the amount of operating system code, the expense of the components, their physical size, and their effect on performance. In this paper we survey the state of the art in file and I/O system design and optimization as it applies to large data processing installations. In a companion paper, some research results applicable to both current and future system designs are summarized. Among the topics we discuss is the optimization of current file systems, covering block size choice, data set placement, disk arm scheduling, rotational scheduling, compaction, fragmentation, I/O multipathing, and file data structures. A set of references to the literature, especially to analytic I/O system models, is presented, and the general tuning of file and I/O systems is also considered. Current and forthcoming disk architectures are the second topic: the count-key-data architecture of current disks (e.g. IBM 3350, 3380) is compared with the fixed-block architecture of newer products (IBM 3310, 3370). The use of semiconductor drum replacements is considered, and some commercially available systems are briefly described.
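To illustrate the disk arm scheduling topic mentioned above, here is a small Python sketch comparing first-come-first-served service against SCAN (elevator) ordering; the cylinder numbers are a textbook-style example, not data from the paper.

```python
def seek_distance(order, start):
    """Total head movement (in cylinders) for a given service order."""
    pos, total = start, 0
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return total

def scan_order(requests, start):
    """SCAN (elevator) scheduling: sweep upward from the head position,
    then service the remaining requests on the way back down."""
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    return up + down

queue = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53
print(seek_distance(queue, head))                     # FCFS order: 640
print(seek_distance(scan_order(queue, head), head))   # SCAN order: 299
```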
5.
In this paper, we explore two techniques for reducing memory latency in bus-based multiprocessors. The first, designed for sector caches, is a snoopy cache coherence protocol that uses a large transfer block to exploit spatial locality while using a small coherence block (called a subblock) to avoid false sharing. The second technique is read snarfing (or read broadcasting), in which every cache can capture data transmitted in response to another cache's read request and use it to update invalid blocks of its own.

We evaluated the two techniques by simulating six applications that exhibit a variety of reference patterns. We compared the performance of the new protocol against that of the Illinois protocol with both small and large block sizes, and found that it was effective in reducing memory latency and provided more consistently good results than the Illinois protocol at any fixed line size. Read snarfing also improved performance, mostly for protocols that use large line sizes.
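The read-snarfing idea can be sketched in a few lines of Python: caches on a toy bus snoop every read response and refill their own invalid copies. This is an illustrative model, not the simulated protocol from the paper.

```python
class SnoopyCache:
    """Toy bus-attached cache supporting read snarfing: when any read
    response appears on the shared bus, a cache holding that block in
    the INVALID state grabs ('snarfs') the data instead of ignoring it."""
    def __init__(self, name):
        self.name = name
        self.lines = {}                      # block -> (state, data)

    def snoop(self, block, data):
        state, _ = self.lines.get(block, (None, None))
        if state == "INVALID":               # snarf the passing data
            self.lines[block] = ("VALID", data)

def bus_read(block, memory, caches, requester):
    """Satisfy a read miss from memory; all other caches snoop the reply."""
    data = memory[block]
    requester.lines[block] = ("VALID", data)
    for cache in caches:
        if cache is not requester:
            cache.snoop(block, data)
    return data

memory = {0: "A"}
c1, c2 = SnoopyCache("c1"), SnoopyCache("c2")
c2.lines[0] = ("INVALID", None)              # c2 was invalidated earlier
bus_read(0, memory, [c1, c2], requester=c1)
print(c2.lines[0])                           # ('VALID', 'A'): refilled by snarfing
```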

6.
Prefetching is a technique applied to memory management policies in which pages are brought into memory before they are actually needed. In this study, prior knowledge of program behavior obtained from trace data is used to parameterize a variation of the working set memory management policy that supports demand-page prefetching with one-page lookahead. Two new algorithms supporting double-page prefetching are proposed, and a comparative analysis of them is made. Evaluation of the techniques through trace-driven simulation shows that, in general, they are very effective in reducing the page fault rate and, in some cases, the space-time product. As expected, memory occupancy increases, but usually by small amounts. The results are encouraging for a variety of programs that exhibit some form of locality (spatial and/or temporal).
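Below is a minimal Python sketch of one-page-lookahead prefetching under a working-set-style policy, assuming a purely recency-based window and sequential (p + 1) prefetching; the trace and window size are invented for illustration, not taken from the study's trace data.

```python
def simulate(trace, window, lookahead):
    """Working-set-style paging with optional one-page lookahead: on a
    demand fault for page p, page p + 1 is prefetched as well. Returns
    the number of demand faults; `window` is the working-set window
    measured in references."""
    resident = {}                                   # page -> last reference time
    faults = 0
    for t, page in enumerate(trace):
        # drop pages that have fallen out of the working-set window
        resident = {p: ts for p, ts in resident.items() if t - ts <= window}
        if page not in resident:
            faults += 1
            if lookahead:
                resident[page + 1] = t              # sequential prefetch
        resident[page] = t
    return faults

trace = [0, 1, 2, 3, 4, 5, 2, 3, 6, 7]              # mostly sequential
print(simulate(trace, window=4, lookahead=False))   # 8 demand faults
print(simulate(trace, window=4, lookahead=True))    # 4 demand faults
```

As the abstract notes, the prefetched pages raise memory occupancy slightly while cutting the demand fault count for sequential reference patterns.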
7.
A Prefetch-Based Caching Strategy for Streaming Media Service Systems
In streaming media service systems with VCR functionality, the randomness of requests degrades the user's on-demand experience. Combining a data-prefetching mechanism with a segment-based caching strategy, this paper derives the expected on-demand delay for a user, gives a near-optimal cache management strategy, and approaches the optimum through online computation. Given a known cache state, a corresponding data-prefetching algorithm is also presented; the cooperation of caching and prefetching, the two data-acquisition methods, reduces client startup delay and improves cache efficiency. Simulation results confirm the effectiveness of the proposed algorithms.
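A rough Python sketch of the segment-based caching-plus-prefetching interplay the paper builds on: segments just past the playback point are prefetched so normal playback and VCR-style jumps are more likely to hit the cache. The LRU policy and prefetch depth here are illustrative choices, not the paper's near-optimal strategy.

```python
class SegmentCache:
    """Segment cache for a streamed video with VCR-style seeks: keep
    recently used segments cached and prefetch the next few segments
    past the current playback point."""
    def __init__(self, capacity, prefetch_depth=2):
        self.capacity = capacity
        self.prefetch_depth = prefetch_depth
        self.cached = []                 # LRU order, oldest first

    def _admit(self, seg):
        if seg in self.cached:
            self.cached.remove(seg)
        elif len(self.cached) >= self.capacity:
            self.cached.pop(0)           # evict least recently used segment
        self.cached.append(seg)

    def play(self, seg):
        hit = seg in self.cached
        self._admit(seg)
        for s in range(seg + 1, seg + 1 + self.prefetch_depth):
            self._admit(s)               # prefetch upcoming segments
        return hit

cache = SegmentCache(capacity=8)
hits = [cache.play(s) for s in [0, 1, 2, 10, 11, 3]]   # a VCR jump to 10
print(hits)   # [False, True, True, False, True, True]
```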
8.
This paper proposes a prefetching technique built on a data-mining method that combines users' access sequences with Web page content. The technique takes both the content semantics and the content size of Web pages into account, so it can raise the hit rate to some extent while also reducing server load.
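A hedged Python sketch of how access sequences and content features might be combined to rank prefetch candidates, in the spirit of the method described; the 0.6/0.3/0.1 weights, similarity scores, and size penalty are all invented placeholders, not the paper's actual mined model.

```python
from collections import Counter

def rank_prefetch(sessions, sizes, similarity, current, k=2):
    """Score prefetch candidates by mixing (a) how often a page followed
    `current` in past access sequences, (b) a content-similarity score,
    and (c) a penalty for large pages (to spare server load)."""
    follows = Counter()
    for session in sessions:
        for a, b in zip(session, session[1:]):
            if a == current:
                follows[b] += 1
    total = sum(follows.values()) or 1
    max_size = max(sizes.values())

    def score(page):
        p_seq = follows[page] / total
        sim = similarity.get((current, page), 0.0)
        cost = sizes.get(page, max_size) / max_size
        return 0.6 * p_seq + 0.3 * sim - 0.1 * cost

    candidates = set(follows) | {p for (c, p) in similarity if c == current}
    return sorted(candidates, key=score, reverse=True)[:k]

sessions = [["home", "news", "sports"], ["home", "news"], ["home", "shop"]]
sizes = {"news": 20, "sports": 50, "shop": 300}          # in KB
similarity = {("home", "news"): 0.8, ("home", "shop"): 0.2}
print(rank_prefetch(sessions, sizes, similarity, current="home"))  # ['news', 'shop']
```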
9.
An Intelligent Prefetching Algorithm
Network latency is one of the main QoS problems for users; it depends on many factors, such as network bandwidth, transmission delay, queueing delay, and the processing speed of clients and servers. Caching and prefetching are currently the main techniques used to reduce network latency, but the hit-rate gains that caching alone brings to a caching proxy server are limited. This paper systematically reviews the basic ideas behind existing prefetching algorithms and classifies them into four categories: popularity-based, interaction-based, access-probability-based, and data-mining-based. Building on an analysis and comparison of these categories, an intelligent prefetching scheme is proposed. The scheme uses fuzzy matching to compute the probability that a user will access a page, and it controls both the amount and the timing of prefetching so as to avoid degrading network performance.
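A minimal Python sketch of the two ideas in the abstract, fuzzy-matched access probabilities and throttled prefetching; `difflib.SequenceMatcher` stands in for whatever fuzzy metric the paper uses, and the threshold, budget, and load figures are illustrative.

```python
from difflib import SequenceMatcher

def access_probability(history, candidate):
    """Fuzzy-match a candidate URL against recently accessed URLs and
    treat the best match ratio as a rough access probability."""
    if not history:
        return 0.0
    return max(SequenceMatcher(None, past, candidate).ratio()
               for past in history)

def choose_prefetch(history, candidates, threshold=0.6, budget=3, load=0.4):
    """Prefetch only high-probability pages, and shrink the budget as
    network load rises, so prefetching never swamps the link."""
    allowed = max(0, int(budget * (1.0 - load)))
    scored = sorted(candidates,
                    key=lambda c: access_probability(history, c),
                    reverse=True)
    return [c for c in scored
            if access_probability(history, c) >= threshold][:allowed]

history = ["/news/sports/football", "/news/sports/tennis"]
print(choose_prefetch(history, ["/news/sports/golf", "/admin/login"]))
# ['/news/sports/golf']: similar to past accesses, and within the budget
```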
10.
Multiple memory models have been proposed to capture the effects of the memory hierarchy, culminating in the I-O model of Aggarwal and Vitter (Commun. ACM 31(9):1116–1127, 1988). More than a decade of architectural advances have introduced features that the I-O model does not capture, most notably prefetching. We propose a relatively simple Prefetch model that incorporates data prefetching into the traditional I-O model and show how to design optimal algorithms that attain close to peak memory bandwidth. Unlike (the inverse of) memory latency, memory bandwidth is much closer to the processing speed; intelligent use of prefetching can therefore considerably mitigate the I-O bottleneck. For some fundamental problems, our algorithms attain running times approaching those of the idealized random access machine under reasonable assumptions. Our work also explains more precisely the significantly superior performance of I-O efficient algorithms on systems that support prefetching compared to ones that do not.
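The bandwidth-versus-latency argument can be made concrete with a toy cost model in Python: without prefetching every block transfer pays the full latency, while overlapped (prefetched) transfers pay it only once; the numbers are illustrative, not from the paper.

```python
def io_time(num_blocks, latency, transfer, overlap):
    """Toy cost model: with no prefetching each block pays full
    latency + transfer time; with prefetching, the latencies of
    consecutive fetches overlap, so a long stream of blocks runs at
    nearly pure memory bandwidth."""
    if overlap:
        return latency + num_blocks * transfer   # one startup latency
    return num_blocks * (latency + transfer)     # latency paid per block

blocks, lat, xfer = 1_000, 100.0, 1.0            # latency >> transfer time
print(io_time(blocks, lat, xfer, overlap=False)) # 101000.0
print(io_time(blocks, lat, xfer, overlap=True))  # 1100.0
```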