Found 20 similar documents; search took 187 ms.
1.
Docker images are the foundation on which Docker containers run. Current image security checking methods are incomplete, leaving running containers exposed to threats such as container escape and denial-of-service attacks. To prevent the use of poisoned images, this paper proposes DTDIS (Detect Trusted Docker Image Source), a model for detecting trusted Docker image sources. The model uses a virtual trusted cryptography module (vTCM) to build a database of image baseline values and detect whether local image files have been tampered with; it extends the Clair image scanner with a parent-image vulnerability database to avoid redundant scanning; and it combines file-measurement information with vulnerability-scan results to judge whether a Docker image source is trustworthy. Experiments in a cloud environment show that the model can effectively assess the security of Docker images and ensure that users run trusted images.
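The baseline-comparison step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer naming scheme (`layer-0`, `layer-1`, …) and the `baseline_db` dictionary are hypothetical stand-ins for the vTCM-backed baseline database.

```python
import hashlib

def layer_digest(layer_bytes: bytes) -> str:
    """SHA-256 digest of one image layer blob."""
    return hashlib.sha256(layer_bytes).hexdigest()

def image_is_trusted(layers: list[bytes], baseline_db: dict[str, str]) -> bool:
    """An image is trusted only if every layer's digest matches the baseline."""
    for i, blob in enumerate(layers):
        expected = baseline_db.get(f"layer-{i}")
        if expected is None or layer_digest(blob) != expected:
            return False  # unknown layer, or layer was tampered with
    return True

# Build a toy baseline and check an intact vs. a tampered image.
baseline = {"layer-0": hashlib.sha256(b"base image").hexdigest(),
            "layer-1": hashlib.sha256(b"app layer").hexdigest()}
print(image_is_trusted([b"base image", b"app layer"], baseline))  # True
print(image_is_trusted([b"base image", b"tampered!"], baseline))  # False
```

The same compare-against-baseline pattern underlies most integrity measurement schemes; the model's contribution is anchoring the baseline in a virtual trusted cryptography module rather than an unprotected file.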
2.
3.
While Docker enables fast, convenient system deployment, it raises content-security problems for images. To address this, SecDr, a content-secure Docker image registry, is designed. First, it performs a static, layer-by-layer vulnerability check on image files pushed to the registry, confirming whether the software packages installed in the image contain known vulnerabilities. Second, it runs the image and penetration-tests the resulting container, attempting to detect code vulnerabilities introduced by developers during secondary development. Results show that SecDr can effectively discover both publicly known system vulnerabilities and vulnerabilities introduced during development, addressing the content-security problem of Docker images. SecDr's functionality has been validated in enterprise-level project development.
4.
5.
In container virtualization, the mainstream way to build images is through a Dockerfile. However, building images this way has clear shortcomings: because of the complexity of the Dockerfile language, plain text editing provides no effective syntax guidance and no effective detection of likely errors, making container image construction inefficient. In addition, for third-party images with incomplete documentation, the image's function and usage cannot be determined reliably, and security is another major challenge for third-party images; together these lead to low reuse of container images. To address these problems, on the basis of analyzing Dockerfile syntax, statistically analyzing common Dockerfile errors, and studying Docker's image storage mechanism in depth, a Dockerfile-oriented image-building tool is designed and implemented with key techniques including visual editing, error detection, and reverse analysis. The tool provides effective syntax guidance during image building and detects common Dockerfile errors. To verify the function and security of third-party images, a method is designed for reverse-generating a Dockerfile from a Docker image, so that users can fully understand a third-party image's function and usage from its Dockerfile; rebuilding the image from that Dockerfile can also mitigate, to some extent, the security risks of third-party images.
6.
7.
A design and deployment method for a Docker-based container cloud platform is proposed. User requirements for the platform are analyzed, and its layered architecture and component modules are described in detail. During deployment, the cluster monitoring scheme is improved by combining Grafana and Prometheus into a reliable performance-monitoring solution. For the image registry, Harbor is chosen as a private registry to ensure security and stability, and a Network File System (NFS) storage service provides shared storage for containers. Tests verify the feasibility and efficiency of the method.
8.
A Docker-based system for large-scale log collection and analysis. Total citations: 1 (self-citations: 0, others: 1)
Traditional log-analysis techniques suffer from low efficiency, limited functionality, and poor extensibility when handling large-scale logs. To solve these problems, a Docker-based large-scale log collection and analysis system is designed. The system has five layers: data collection, data caching, data forwarding, data storage, and data retrieval and display. It supports collecting logs of various types from different data sources, provides reliable data transport through Kafka message queues, uses Elasticsearch for distributed storage and retrieval, and analyzes logs visually. Docker container technology is used for rapid deployment and version control. The system is real-time, scalable, and easy to deploy; experimental results show that it is feasible, effective, and of good practical value.
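The layered data flow described above can be sketched with standard-library stand-ins: a `queue.Queue` plays the role of the Kafka buffer and a plain dict plays the role of the Elasticsearch keyword index. This illustrates the collection → caching → storage → retrieval pipeline only; all names here are illustrative, not the system's API.

```python
import queue

buffer = queue.Queue()  # caching/forwarding layer (Kafka stand-in)
index = {}              # storage layer (Elasticsearch stand-in)

def collect(source: str, line: str) -> None:
    """Collection layer: tag each log line with its source and enqueue it."""
    buffer.put({"source": source, "message": line})

def store_all() -> None:
    """Forwarding + storage layers: drain the buffer into the keyword index."""
    while not buffer.empty():
        doc = buffer.get()
        for word in doc["message"].split():
            index.setdefault(word, []).append(doc)

def search(keyword: str) -> list:
    """Retrieval layer: look up documents containing the keyword."""
    return index.get(keyword, [])

collect("web-01", "GET /index.html 200")
collect("web-02", "GET /login 500")
store_all()
print(len(search("GET")))  # 2
```

In the real system each layer runs in its own Docker container, so the queue and index are network services rather than in-process objects, but the hand-off between layers has the same shape.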
9.
10.
In the software-as-a-service cloud delivery model, a hosting center deploys a virtual machine image template on a server on demand; the templates are typically maintained in a central repository. Because hosting centers are physically dispersed and Internet bandwidth is limited, transferring gigabyte-scale template files from the repository incurs large delays. Exploiting the commonality among template files, this paper proposes an ETSIT-CO algorithm that decides how patches and template files are distributed in the cache, computing the cache's composition so as to minimize transfer time from the repository and thereby reduce request response time. Simulation results show that, compared with standard template-file caching, the patch-based caching strategy achieves a large performance gain; deployed on the paper's prototype testbed, the ETSIT-CO patch-caching strategy improves performance by 90% over an ETSIT-CO selection policy that caches only templates.
11.
《Future Generation Computer Systems》2006,22(1-2):16-31
Proxy caches are essential to improve the performance of the World Wide Web and to reduce user-perceived latency. Appropriate cache management strategies are crucial to achieve these goals. In our previous work, we have introduced Web object-based caching policies. A Web object consists of the main HTML page and all of its constituent embedded files. Our studies have shown that these policies improve proxy cache performance substantially. In this paper, we propose a new Web object-based policy to manage the storage system of a proxy cache. We propose two techniques to improve the storage system performance. The first technique is concerned with prefetching the related files belonging to a Web object, from the disk to main memory. This prefetching improves performance as most of the files can be provided from the main memory rather than from the proxy disk. The second technique stores the Web object members in contiguous disk blocks in order to reduce the disk access time. We used trace-driven simulations to study the performance improvements one can obtain with these two techniques. Our results show that the first technique by itself provides up to 50% reduction in hit latency, which is the delay involved in providing a hit document by the proxy. An additional 5% improvement can be obtained by incorporating the second technique.
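The first technique above — pulling a Web object's embedded members into memory when its main page is read from disk — can be sketched in a few lines. The disk contents, the object-membership map, and the path names here are hypothetical; the point is only the prefetch-on-miss pattern.

```python
# Toy "disk" and the membership map of one Web object (main page + embeds).
disk = {
    "/a.html": "main page",
    "/a.css": "styles",
    "/a.png": "image",
}
web_objects = {"/a.html": ["/a.css", "/a.png"]}  # object membership map
memory_cache = {}

def fetch(path: str) -> str:
    """Serve from memory if possible; on a disk read of a main page,
    prefetch all of its embedded members into memory as well."""
    if path in memory_cache:
        return memory_cache[path]  # memory hit: no disk latency
    data = disk[path]              # disk access
    memory_cache[path] = data
    for member in web_objects.get(path, []):
        memory_cache.setdefault(member, disk[member])  # prefetch members
    return data

fetch("/a.html")
print("/a.css" in memory_cache)  # True: the embed will be a memory hit
```

The second technique (contiguous disk layout of object members) is a placement decision inside the storage system and has no analogue at this level of sketch.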
12.
《Data & Knowledge Engineering》2007,60(3):770-788
This paper provides a transparent and speculative algorithm for content-based web page prefetching. The algorithm relies on a profile based on the Internet browsing habits of the user. It aims at reducing the perceived latency when the user requests a document by clicking on a hyperlink. The proposed user profile relies on the frequency of occurrence of selected elements forming the web pages visited by the user. These frequencies are employed in a mechanism for the prediction of the user's future actions. For the anticipation of an adjacent action, the anchored text around each of the outbound links is used and weights are assigned to these links. Some of the linked documents are then prefetched and stored in a local cache according to the assigned weights. The proposed algorithm was tested against three different prefetching algorithms and yielded improved cache-hit rates at a moderate bandwidth overhead. Furthermore, the precision of accurately inferring the user's preference is evaluated through recall-precision curves. Statistical evaluation confirms that the achieved recall-precision improvement is significant.
13.
Prefetching is a proactive caching technique that can raise cache hit rates, but its effectiveness depends heavily on the prefetching policy used. This paper proposes an object-based limited prefetching policy for proxy servers: by adjusting the size of the prefetch space on the proxy, it prevents useless pages from occupying too much cache space and improves cache utilization, thereby achieving a higher hit rate. Experiments show that the object-based limited prefetching policy achieves a far higher hit rate than LRU, and also clearly improves on object-based LRU.
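The core idea above — confining speculative pages to a bounded prefetch area so they cannot crowd out demand-fetched pages — can be sketched with two `OrderedDict` areas. The class name, area sizes, and promotion rule are illustrative assumptions, not the paper's exact policy.

```python
from collections import OrderedDict

class LimitedPrefetchCache:
    """Demand-fetched pages live in an LRU area; prefetched pages are
    confined to a separate, size-limited area."""

    def __init__(self, demand_size: int, prefetch_size: int):
        self.demand = OrderedDict()    # LRU area for requested pages
        self.prefetch = OrderedDict()  # bounded area for speculative pages
        self.demand_size, self.prefetch_size = demand_size, prefetch_size

    def insert_demand(self, key, value):
        self.demand[key] = value
        self.demand.move_to_end(key)
        if len(self.demand) > self.demand_size:
            self.demand.popitem(last=False)  # evict least-recently-used page

    def insert_prefetch(self, key, value):
        self.prefetch[key] = value
        if len(self.prefetch) > self.prefetch_size:
            self.prefetch.popitem(last=False)  # evict oldest prefetched page

    def get(self, key):
        if key in self.prefetch:  # prefetch hit: promote to demand area
            self.insert_demand(key, self.prefetch.pop(key))
        if key in self.demand:
            self.demand.move_to_end(key)
            return self.demand[key]
        return None

cache = LimitedPrefetchCache(demand_size=2, prefetch_size=1)
cache.insert_prefetch("/p1", "speculative")
cache.insert_prefetch("/p2", "speculative")  # /p1 evicted: area is full
print(cache.get("/p1"), cache.get("/p2"))    # None speculative
```

However large the prefetch stream, at most `prefetch_size` speculative pages ever occupy cache space, which is what keeps useless prefetches from hurting the hit rate of the demand area.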
14.
《Journal of Parallel and Distributed Computing》2000,60(5):585-615
In this paper we propose and evaluate a new data-prefetching technique for cache coherent multiprocessors. Prefetches are issued by a functional unit called a prefetch engine which is controlled by the compiler. We let second-level cache misses generate cache miss traps and start the prefetch engine in a trap handler. The trap handler is fast (40–50 cycles) and does not normally delay the program beyond the memory latency of the miss. Once started, the prefetch engine executes on its own and causes no instruction overhead. The only instruction overhead in our approach is when a trap handler completes after data arrives. The advantages of this technique are (1) it exploits static compiler analysis to determine what to prefetch, which is hard to do in hardware, (2) it uses prefetching with very little instruction overhead, which is a limitation for traditional software-controlled prefetching, and (3) it is accurate in the sense that it generates very little useless traffic while maintaining a high prefetching coverage. We also study whether one could emulate the prefetch engine in software, which would not require any additional hardware beyond support for generating cache miss traps and ordinary prefetch instructions. In this paper we present the functionality of the prefetch engine and a compiler algorithm to control it. We evaluate our technique on six parallel scientific and engineering applications using an optimizing compiler with our algorithm and a simulated multiprocessor. We find that the prefetch engine removes up to 67% of the memory access stall time at an instruction overhead less than 0.42%. The emulated prefetch engine removes in general less stall time at a higher instruction overhead.
15.
16.
In out-of-core computation, I/O operations are slow, so file access accounts for a large share of total running time; overlapping file operations with computation can therefore greatly improve efficiency. Software data prefetching is an effective latency-hiding technique: data is read from disk into a buffer before it is actually used, raising the cache hit rate and reducing data-access time. By using two buffers that alternately hold the current and the next data block, memory accesses can be made to hit the cache completely, which greatly improves the efficiency of the out-of-core parallel Cholesky factorization program. The ratio of I/O time to CPU time is also a major factor affecting efficiency.
17.
Chithra D. Gracia 《Applied Artificial Intelligence》2016,30(5):475-493
Web resources on the World Wide Web are growing, to a large extent due to the services and applications it provides. Because web traffic is heavy, gaining access to these resources incurs user-perceived latency. Although the latency can never be avoided entirely, it can be reduced to a large extent. Web prefetching is a technique that anticipates the user's future requests and fetches them into the cache before an explicit request is made. Because web objects are of various types, a new algorithm is proposed that concentrates on prefetching embedded objects, including audio and video files. Further, clustering is employed using adaptive resonance theory (ART)2 in order to prefetch embedded objects as clusters. For comparative study, the web objects are clustered using ART2, ART1, and other statistical techniques. The clustering results confirm the superiority of ART2 and, thereby, prefetching web objects in clusters is observed to produce a high hit rate.
18.
Caching and prefetching play an important role in improving Web access performance in wireless environments. This paper studies Web caching and prefetching mechanisms for wireless LANs: based on data mining and information theory respectively, it proposes prediction algorithms using sequence mining and deferred updates, and designs a context-aware prefetching algorithm and a benefit-driven cache replacement mechanism. These algorithms have been implemented in the Web caching system OnceEasyCache. Performance evaluation shows that integrating them effectively improves the cache hit rate and the latency-savings rate.
19.
20.
Intelligent Web acceleration based on network performance: caching and prefetching. Total citations: 8 (self-citations: 0, others: 8)
Web traffic accounts for a large share of network traffic; when bandwidth cannot be expanded, techniques are needed to use the available bandwidth efficiently and improve network performance. This paper studies intelligent Web acceleration based on network performance metrics such as RTT (round-trip time). Based on an analysis of traffic at a Web proxy server and measurements of network RTT, it proposes an intelligent prefetch-control technique and a new cache replacement method. Simulations show that the new method raises the cache hit rate, and that prefetching improves response speed and effectively improves Web access performance without noticeably increasing network load.
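One natural way to make cache replacement RTT-aware, in the spirit of the abstract above, is to evict the document whose re-fetch would be cheapest, i.e. the one with the smallest measured RTT. This cost model and the function below are an illustrative assumption, not the paper's algorithm (which the abstract does not spell out).

```python
def evict_cheapest(cache: dict, rtt_ms: dict) -> str:
    """Remove and return the cached key with the lowest re-fetch cost (RTT)."""
    victim = min(cache, key=lambda k: rtt_ms.get(k, 0.0))
    del cache[victim]
    return victim

# Three cached documents with measured round-trip times to their origins.
cache = {"/far.html": "...", "/near.html": "...", "/mid.html": "..."}
rtt = {"/far.html": 320.0, "/near.html": 12.0, "/mid.html": 95.0}
print(evict_cheapest(cache, rtt))  # /near.html: cheapest to re-fetch
```

A production policy would combine RTT with recency and size (as LRU variants and GreedyDual-style policies do); the sketch isolates only the network-performance term.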