8 results found (search time: 46 ms)
1.
An SPN-Based Integrated Model for Web Prefetching and Caching   (Cited: 16, self: 0, other: 16)
The World Wide Web has become the primary means of information dissemination. Because network bandwidth is limited, users often suffer long waits. Web prefetching and web caching are the primary approaches to reducing user-perceived access latency and improving quality of service. In this paper, a Stochastic Petri Net (SPN) based integrated web prefetching and caching model (IWPCM) is presented and its performance is evaluated. The performance metrics access latency, throughput, HR (hit ratio), and BHR (byte hit ratio) are analyzed and discussed. Simulations show that, compared with a caching-only model (CM), IWPCM further improves throughput, HR, and BHR and reduces access latency. The performance evaluation based on the SPN model can provide a basis for implementing web prefetching and caching, and their combination holds promise for improving the QoS of web systems.
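The HR and BHR metrics evaluated above can be made concrete with a toy trace-driven cache simulation. This sketch uses LRU replacement purely for illustration; the paper's own method is the SPN model, not this simulation, and the trace and capacity below are invented:

```python
from collections import OrderedDict

def simulate_lru(trace, capacity_bytes):
    """Replay a (url, size) request trace against an LRU cache and
    report hit ratio (HR) and byte hit ratio (BHR)."""
    cache = OrderedDict()            # url -> size, most recent at the end
    used = 0
    hits = hit_bytes = total_bytes = 0
    for url, size in trace:
        total_bytes += size
        if url in cache:
            hits += 1
            hit_bytes += size
            cache.move_to_end(url)   # refresh recency on a hit
        else:
            while cache and used + size > capacity_bytes:
                _, old = cache.popitem(last=False)   # evict least recent
                used -= old
            if size <= capacity_bytes:
                cache[url] = size
                used += size
    return hits / len(trace), hit_bytes / total_bytes

trace = [("/a", 100), ("/b", 300), ("/a", 100), ("/c", 200), ("/b", 300)]
hr, bhr = simulate_lru(trace, capacity_bytes=500)   # hr = 0.2, bhr = 0.1
```

Note that HR and BHR can diverge: a cache that favors many small objects raises HR while BHR lags, which is why the paper reports both.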
2.
A Two-Level Trust Model Based on Mutual Trust in Peer-to-Peer Networks   (Cited: 3, self: 1, other: 2)
In P2P systems, reputation models are one of the primary methods for establishing trust relationships between nodes, but existing P2P reputation models are almost all purely decentralized and suffer from slow trust convergence, complex trust management, and high network overhead. In TLT (two-level trust), nodes spontaneously organize into trust clusters, and trust is evaluated at cluster granularity. Each cluster consists of a cluster head and member nodes, which stand in a mutual-trust relationship: the cluster head, to raise its own inter-cluster service trust, uses intra-cluster service trust to monitor members' service performance and filter out malicious members; member nodes, to improve their service reputation and receive better service, use proxy trust to assess the cluster head's management capability. Analysis and simulation results show that in TLT, node trust values converge quickly and malicious behavior is identified rapidly; TLT also scales well, with simple trust management and low network overhead.
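The abstract does not give TLT's update rule, so the cluster head's intra-cluster filtering can only be sketched under assumptions: the exponential-moving-average update, the 0.5 initial trust, and the 0.4 eviction threshold below are all hypothetical, not from the paper:

```python
def update_trust(old, outcome, alpha=0.3):
    """EMA over interaction outcomes (1 = good service, 0 = bad)."""
    return (1 - alpha) * old + alpha * outcome

class Cluster:
    """Cluster head's view of its members' intra-cluster service trust."""
    def __init__(self, members, threshold=0.4):
        self.service_trust = {m: 0.5 for m in members}   # neutral prior
        self.threshold = threshold

    def observe(self, member, outcome):
        self.service_trust[member] = update_trust(
            self.service_trust[member], outcome)

    def filter_malicious(self):
        """Drop members whose service trust fell below the threshold."""
        bad = [m for m, t in self.service_trust.items() if t < self.threshold]
        for m in bad:
            del self.service_trust[m]
        return bad

c = Cluster(["n1", "n2"])
for _ in range(5):
    c.observe("n1", 1)   # n1 serves well
    c.observe("n2", 0)   # n2 misbehaves
bad = c.filter_malicious()   # n2 is identified after a few bad interactions
```

The quick drop of a misbehaving node's trust under repeated bad outcomes mirrors the fast identification of malicious behavior the abstract claims.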
3.
Helper-threaded prefetching based on chip multiprocessors has been shown to reduce memory latency and improve overall system performance, and has been explored for linked data structure accesses. In earlier work, we proposed an effective threaded prefetching technique that balances delinquent loads between the main thread and the helper thread to improve prefetching effectiveness. In this paper, we analyze the memory access characteristics of specific applications to estimate the effective prefetch distance range for our technique. The effect of hardware prefetchers on the estimation is also examined. We discuss key design issues of the proposed method and present preliminary experimental results. Our evaluations indicate that the bounded range of effective prefetch distances can be determined with our method, and that the optimal prefetch distance can then be found within the estimated range in a few trial runs.
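The abstract does not reproduce its estimation formulas; the following first-order model only illustrates the general shape of such a bound, and every parameter in it (latencies, work per node, cache budget) is a made-up placeholder, not a value from the paper:

```python
import math

def prefetch_distance_range(mem_latency, work_per_node, cache_lines_budget):
    """Toy model of an effective prefetch distance range.

    Lower bound: prefetch far enough ahead that the line arrives before
    it is used, i.e. d * work_per_node >= mem_latency.
    Upper bound: not so far ahead that in-flight prefetched lines exceed
    the cache budget and start thrashing useful data.
    """
    d_min = math.ceil(mem_latency / work_per_node)
    d_max = cache_lines_budget
    return (d_min, d_max) if d_min <= d_max else None   # None: no usable range

# hypothetical numbers: 300-cycle miss, 40 cycles of work per list node,
# room for 32 in-flight prefetched lines
rng = prefetch_distance_range(mem_latency=300, work_per_node=40,
                              cache_lines_budget=32)   # (8, 32)
```

Within such a bounded range, a few trial runs suffice to pick the best distance, which matches the tuning procedure the abstract describes.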
4.
Helper-threaded prefetching based on chip multiprocessors is a well-known approach to reducing memory latency and has been explored for linked data structure accesses. However, conventional helper-threaded prefetching often suffers from useless prefetches and cache thrashing, which limit its effectiveness. In this paper, we first analyze the shortcomings of conventional helper-threaded prefetching for linked data structures. We then propose an improved scheme, Skip Helper Threaded Prefetching, for hotspots with two-level data traversals. Our solution profiles the application and balances delinquent loads between the main thread and the prefetching thread based on the characteristics of the operations in its hotspots. Evaluations show that the proposed solution improves average performance by 8.9% (-O2) and 8.5% (-O3) over conventional helper-threaded prefetching that greedily prefetches all delinquent loads. We also compare our proposal with active threaded prefetching, which synchronizes with the main thread via semaphores, and find that ours provides better performance for the targeted applications.
5.
Nowadays, we are heading towards integrating hundreds to thousands of cores on a single chip. However, traditional system software and middleware are not well suited to managing and providing services at such scale. To improve the scalability and adaptability of operating system and middleware services on future many-core platforms, we propose pinned OS/services. By porting each OS and runtime system (middleware) service to a separate core (with special hardware acceleration), we expect to achieve maximal performance gain and energy efficiency in many-core environments. As a case study, we target XML (Extensible Markup Language), the commonly used data transfer/storage standard. We have implemented and evaluated the design by porting an XML parsing service onto the Intel 48-core Single-Chip Cloud Computer (SCC) platform. The results show that it provides considerable energy savings. However, we also identified heavy performance penalties on the memory side that bloat the parsing service. Hence, as a further step, we propose a memory-side hardware accelerator for XML parsing. With this specialized hardware design, we can further enhance performance gain and energy efficiency: performance improves by 20% with a 12.27% energy reduction.
6.
To obtain accurate individual license plate characters, a character segmentation algorithm combining the connected-component method and the vertical projection method is proposed. Based on the connectivity of each letter or digit on the plate, the method scans the neighbors of each target pixel according to given pixel criteria and neighbor-connectivity relations to find the connected regions; vertical projection is then applied to locate the Chinese-character region, so that individual characters are segmented precisely. Experimental results show that the algorithm considerably improves both the accuracy and the processing speed of license plate character segmentation, meeting the system's real-time and accuracy requirements.
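The vertical projection half of the algorithm can be sketched as follows: sum foreground pixels per column of a binarized plate image and split at zero-valued gaps. The connected-component pass is omitted, and the tiny test image is purely illustrative:

```python
def segment_by_vertical_projection(binary_img):
    """Split a binarized image (rows of 0/1 pixels) into character column
    spans: runs of columns whose vertical projection is nonzero."""
    if not binary_img:
        return []
    cols = len(binary_img[0])
    proj = [sum(row[c] for row in binary_img) for c in range(cols)]
    spans, start = [], None
    for c, v in enumerate(proj):
        if v > 0 and start is None:
            start = c                        # a character column run begins
        elif v == 0 and start is not None:
            spans.append((start, c - 1))     # run ends at the previous column
            start = None
    if start is not None:
        spans.append((start, cols - 1))      # run touching the right edge
    return spans

img = [
    [1, 1, 0, 0, 1, 0, 1],
    [1, 0, 0, 0, 1, 0, 1],
    [1, 1, 0, 0, 1, 0, 1],
]
spans = segment_by_vertical_projection(img)   # three character spans
```

In the full algorithm, connected components handle touching letters and digits, while this projection step isolates the Chinese character, which may itself consist of several disconnected strokes.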
7.
We study the approximate greatest common divisor problem for a set of polynomials over the finite field F2 with random noise, and propose a public-key cryptographic scheme based on the approximate polynomial GCD problem. We prove the correctness of the scheme and show by reduction that its security is equivalent to solving the approximate GCD problem; possible attacks on the scheme are also discussed. Compared with existing public-key systems, the scheme offers higher security and reliability as well as faster computation.
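The exact, noise-free computation underlying the approximate problem is Euclid's algorithm in F2[x]. As a sketch, polynomials over F2 can be packed into Python integers (bit i = coefficient of x^i), so that subtraction becomes XOR; the noisy variant and the cryptosystem itself are beyond this illustration:

```python
def deg(p):
    """Degree of a polynomial packed into an int (deg(0) = -1)."""
    return p.bit_length() - 1

def poly_mod(a, b):
    """Remainder of a divided by nonzero b over GF(2)."""
    db = deg(b)
    while a and deg(a) >= db:
        a ^= b << (deg(a) - db)   # subtract (== XOR) the aligned divisor
    return a

def poly_gcd(a, b):
    """Euclidean algorithm in F2[x]."""
    while b:
        a, b = b, poly_mod(a, b)
    return a

# gcd of x^2+1 = (x+1)^2 and x^3+x^2+x+1 = (x+1)^3 is (x+1)^2 = x^2+1
g = poly_gcd(0b101, 0b1111)   # 0b101
```

The hardness assumption of the scheme is that recovering such a common divisor becomes infeasible once each input polynomial is perturbed by low-degree random noise.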
8.
Nowadays, inter-task interference is the main difficulty in analyzing the timing behavior of multicores. The timing-predictable embedded multicore architecture MERASA, which allows safe worst-case execution time (WCET) estimation, has emerged as an attractive solution. In this architecture, WCET can be estimated via the upper bound delay (UBD), which is bounded by the interference-aware bus arbiter (IABA) and dynamic cache partitioning such as columnization or bankization. However, the architecture faces a dilemma between decreasing the UBD and using the shared cache efficiently. To obtain tighter WCET estimates, we propose a novel approach that reduces the UBD by optimizing the bank-to-core mapping on a multicore system with IABA and a two-level partitioned cache. We first present a new UBD computation model based on an analysis of inter-task interference delay, and then put forward a core-sequence optimization method for bank-to-core mapping together with algorithms that minimize the UBD. Experimental results demonstrate that our approach can reduce WCET by 4% to 37%.
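The paper's actual UBD computation model is not reproduced in the abstract. The toy search below only illustrates the shape of a bank-to-core mapping optimization: a hypothetical per-core delay is computed from invented access weights and bank conflict costs, and the mapping minimizing the worst core's delay is found by brute force:

```python
from itertools import permutations

def ubd(mapping, weights, conflict):
    """Hypothetical UBD model: core i, placed on bank mapping[i], pays
    weights[i] times the summed conflict cost against every other core's
    bank; the UBD is the worst-off core's delay."""
    return max(
        weights[i] * sum(conflict[bi][bj]
                         for j, bj in enumerate(mapping) if j != i)
        for i, bi in enumerate(mapping)
    )

def best_mapping(weights, conflict):
    """Exhaustively search bank-to-core assignments for minimum UBD
    (fine for a handful of cores; the paper uses dedicated algorithms)."""
    n = len(weights)
    return min(permutations(range(n)), key=lambda m: ubd(m, weights, conflict))

weights = [3, 1, 1]          # core 0 is the memory-heavy task (invented)
conflict = [                  # symmetric bank-pair conflict costs (invented)
    [0, 4, 1],
    [4, 0, 2],
    [1, 2, 0],
]
m = best_mapping(weights, conflict)   # heavy core lands on the cheap bank 2
```

Even this toy model shows why the mapping matters: placing the heaviest core on the least-conflicting bank lowers the worst-case bound that the WCET estimate inherits.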
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号