Similar Literature
20 similar documents retrieved (search time: 187 ms)
1.
Docker images are the foundation on which Docker containers run. At present there is no comprehensive method for checking image security, leaving running containers exposed to container escape, denial-of-service attacks, and other threats. To prevent the use of poisoned images, this paper proposes DTDIS (Detect Trusted Docker Image Source), a model for detecting trusted Docker image sources. The model uses a virtual trusted cryptography module (vTCM) to build a database of image baseline values and check whether local image files have been tampered with; it extends the Clair image scanner with a parent-image vulnerability database to avoid repeated scanning; and it combines file measurement information with vulnerability-scan results to judge whether a Docker image source is trustworthy. Experiments in a cloud environment show that the model can effectively assess the security of Docker images and ensure that users run trusted images.
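The baseline-value check can be illustrated with a small sketch: hash each image layer, store the digests as a trusted baseline, and later flag any layer whose digest has changed. SHA-256 stands in for the vTCM-backed measurement, and all names here are illustrative, not part of DTDIS itself.

```python
import hashlib

def layer_digest(layer_bytes: bytes) -> str:
    """Measurement value for one image layer (SHA-256 as a stand-in
    for the vTCM-backed measurement described in the paper)."""
    return hashlib.sha256(layer_bytes).hexdigest()

def build_baseline(image_layers: dict) -> dict:
    """Record trusted digests for every layer of an image."""
    return {name: layer_digest(data) for name, data in image_layers.items()}

def verify_image(image_layers: dict, baseline: dict) -> list:
    """Return the names of layers whose digest no longer matches."""
    return [name for name, data in image_layers.items()
            if baseline.get(name) != layer_digest(data)]

# Build a baseline, then tamper with one layer.
layers = {"base": b"alpine rootfs", "app": b"app binaries"}
baseline = build_baseline(layers)
layers["app"] = b"app binaries + backdoor"
tampered = verify_image(layers, baseline)   # -> ["app"]
```

Any layer added or modified after the baseline was recorded shows up in the returned list, which is the signal the model uses to reject a local image file.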

2.
Software, 2017(5): 59-63
OpenStack has become the standard for Infrastructure as a Service (IaaS) in cloud computing. Docker is a container engine built on Linux containers; it isolates and allocates resources through namespaces and resource partitioning, builds images with layered storage, and bundles the operating system together with the application, making standardized, containerized delivery of application environments a reality. This article explores the integration of OpenStack and Docker, analyzes three schemes for combining them, and provides a reference for deploying OpenStack and applying Docker technology.

3.
While Docker makes rapid system deployment convenient, it raises content-security issues for images. To address this, SecDr, a content-secure Docker image registry, is designed. First, it performs a layer-by-layer static vulnerability check on Docker image files pushed to the registry, verifying whether the software packages installed in the image contain known vulnerabilities. Second, it runs the image and performs penetration tests on the container, attempting to detect code vulnerabilities introduced by developers during secondary development. Results show that SecDr can discover both publicly known system vulnerabilities and vulnerabilities introduced during development, solving the content-security problem of Docker images. Its functionality has been validated in enterprise-level projects.
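The static, per-layer check can be sketched as matching each layer's installed packages against a known-vulnerability database. The database contents and function names below are illustrative, not SecDr's actual implementation (the two sample entries correspond to the well-known Heartbleed and Shellshock advisories).

```python
# Toy vulnerability database: (package, version) -> advisories.
KNOWN_VULNS = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],   # Heartbleed-era version
    ("bash", "4.3"): ["CVE-2014-6271"],          # Shellshock-era version
}

def scan_layer(packages):
    """Return {package: [vulns]} for every vulnerable (name, version)."""
    findings = {}
    for name, version in packages:
        vulns = KNOWN_VULNS.get((name, version))
        if vulns:
            findings[name] = vulns
    return findings

def scan_image(layers):
    """Scan each layer in turn, as the registry does on push, and merge."""
    report = {}
    for layer in layers:
        report.update(scan_layer(layer))
    return report

report = scan_image([
    [("openssl", "1.0.1f"), ("curl", "7.68")],   # layer 1
    [("bash", "5.1")],                           # layer 2
])
```

An image whose report is non-empty would be rejected or flagged before it can be pulled and run.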

4.
To meet the need to migrate Docker-based applications between Kubernetes and Docker Swarm clusters, the A-Migrator heterogeneous container-cloud application migration system is studied. Based on the orchestration and scheduling strategies of Kubernetes and Docker Swarm, a feasible method for converting orchestration information between the two is given, and an application migration technique based on image pre-synchronization is proposed. Experimental results show that A-Migrator can migrate Docker-based applications between the two clusters, and that image pre-synchronization reduces application migration time by 60.33% on average.

5.
In container virtualization, the mainstream way to build images is with a Dockerfile. However, building images from Dockerfiles has clear shortcomings: because of the complexity of the Dockerfile language, plain text editing provides no effective syntax guidance and no effective detection of likely errors, making container image construction inefficient. In addition, with third-party images whose documentation is incomplete, an image's function and usage cannot be reliably determined, and security is another major challenge for third-party images; together these lead to low reuse of container images. To address these problems, based on an analysis of Dockerfile syntax, a statistical study of common Dockerfile errors, and an in-depth study of the Docker image storage mechanism, a Dockerfile-oriented image build tool is designed and implemented using visual editing, error detection, and reverse analysis. The tool provides effective syntax guidance during image construction and detects common Dockerfile errors. To verify the function and security of third-party images, a method of reverse-generating a Dockerfile from a Docker image is designed, so that users can fully understand a third-party image's function and usage from its Dockerfile; rebuilding the image from that Dockerfile can also mitigate, to some extent, the security problems of third-party images.
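The error-detection idea can be sketched as a tiny Dockerfile linter. The three rules below are illustrative stand-ins for the tool's statistically derived rule set, not its actual checks.

```python
def lint_dockerfile(text: str):
    """Flag a few common Dockerfile mistakes (illustrative rules only)."""
    errors = []
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    # Rule 1: a Dockerfile must begin with a FROM instruction.
    if not lines or not lines[0].upper().startswith("FROM"):
        errors.append("first instruction must be FROM")
    # Rule 2: only the last CMD takes effect, so multiple CMDs are a mistake.
    cmds = [l for l in lines if l.upper().startswith("CMD")]
    if len(cmds) > 1:
        errors.append("multiple CMD instructions: only the last takes effect")
    # Rule 3: apt-get install without -y blocks on an interactive prompt.
    for l in lines:
        if l.upper().startswith("RUN") and "apt-get install" in l and "-y" not in l:
            errors.append("apt-get install without -y will stall the build")
    return errors

bad = """RUN apt-get install curl
CMD ["a"]
CMD ["b"]"""
errors = lint_dockerfile(bad)   # all three rules fire
```

A visual editor can surface such findings as the user types, which is the syntax-guidance role the abstract describes.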

6.
Software, 2016(3): 110-113
Docker is a lightweight VM solution built on top of LXC and based on process containers. Since the Docker project appeared in 2013, Docker has developed and been adopted rapidly among container technologies [1]. Focusing on Docker's technical advantages and architecture, this article analyzes the characteristics of Docker images and techniques for creating Docker containers, and points out Docker's importance in current software development.

7.
A design and deployment method for a Docker-based container cloud platform is proposed. User requirements for the platform are analyzed, and its layered architecture and component modules are described in detail. During deployment, the cluster monitoring scheme is improved by combining Grafana and Prometheus into a reliable performance-monitoring solution. For the image registry, Harbor is chosen as the private registry to ensure security and stability, and a Network File System (NFS) storage service provides shared storage for containers. Tests verify the feasibility and efficiency of the method.

8.
A Docker-based large-scale log collection and analysis system
Traditional log analysis techniques suffer from low efficiency, limited functionality, and weak extensibility in practice when handling large-scale logs. To solve these problems, a Docker-based large-scale log collection and analysis system is designed. The system is divided into five layers: data collection, data buffering, data forwarding, data storage, and data retrieval and presentation. It supports collecting logs of various types from different data sources, provides reliable data transport through a Kafka message queue, implements distributed storage and retrieval with Elasticsearch, and analyzes logs visually. Docker containers are used for rapid deployment and version control of the system itself. The system is real-time, scalable, and easy to deploy; experimental results show that it is feasible, effective, and of practical value.
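The five-layer flow can be sketched in miniature: a queue stands in for the Kafka buffering layer and a dict-based inverted index for Elasticsearch. Everything here is a toy stand-in chosen for illustration, not the system's actual components.

```python
from queue import Queue

buffer = Queue()   # buffering layer ("Kafka" stand-in)
store = []         # storage layer ("Elasticsearch" stand-in)
index = {}         # inverted index: term -> list of doc ids

def collect(source, line):
    """Collection layer: wrap a raw log line and enqueue it."""
    buffer.put({"source": source, "message": line})

def forward_all():
    """Forwarding + storage layers: drain the queue, store and index."""
    while not buffer.empty():
        doc = buffer.get()
        doc_id = len(store)
        store.append(doc)
        for term in doc["message"].split():
            index.setdefault(term, []).append(doc_id)

def search(term):
    """Retrieval layer: look up documents containing a term."""
    return [store[i] for i in index.get(term, [])]

collect("nginx", "GET /index 200")
collect("app", "ERROR db timeout")
forward_all()
hits = search("ERROR")   # finds the app log line
```

Decoupling collection from storage through the queue is what lets the real system absorb bursts from many sources without losing data.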

9.
A flexible networked data storage center
The application of network storage technology has markedly changed storage systems and storage architectures. The flexible networked data storage center (data center) uses IP-based storage, allowing hosts to bypass the server and access storage devices directly. This paper analyzes the data center's architecture; experimental results show a substantial performance improvement over server-attached storage. The "networked optical disc library" used in the data center enriches its storage hierarchy, while a large disk cache and disc-imaging techniques improve the library's performance. A security mechanism based on ideas from biological immunity effectively blocks abnormal access to, and operations on, the data center's storage nodes.

10.
魏长宝, 李健. Computer Measurement & Control, 2014, 22(11): 3812-3815
In the software-as-a-service cloud delivery model, a hosting center deploys a virtual machine image from a template onto servers on demand, and the image templates are usually maintained in a central repository. Because hosting centers are physically dispersed and Internet bandwidth is limited, transferring gigabyte-scale template files from the repository incurs significant delay. Exploiting the commonality among template files, this article proposes an ETSIT-CO algorithm that determines the distribution of patches and template files in the cache, computing the cache composition so as to minimize transfer time from the repository and thereby reduce request response time. Simulation results show that, compared with standard template-file caching, the patch-based caching strategy achieves a large performance gain; deployed on the article's prototype testbed, the ETSIT-CO patch-caching strategy improves performance by 90% over an ETSIT-CO selection strategy that caches only templates.
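The intuition behind patch-based caching can be sketched with a greedy rule: under a fixed cache budget, keep the items (full templates, or small patches against an already-cached base) that save the most transfer time per cached byte. This greedy rule and the numbers below are our illustration, not the actual ETSIT-CO algorithm.

```python
def choose_cache(items, capacity):
    """items: list of (name, size_gb, transfer_secs_saved).
    Greedily keep the best saved-time-per-gigabyte items that fit."""
    ranked = sorted(items, key=lambda it: it[2] / it[1], reverse=True)
    chosen, used = [], 0.0
    for name, size, saved in ranked:
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen

items = [
    ("ubuntu-base", 2.0, 400.0),        # full template
    ("ubuntu+java-patch", 0.3, 350.0),  # small patch against the base
    ("windows-base", 8.0, 900.0),       # large template, doesn't fit
]
cached = choose_cache(items, capacity=4.0)
```

The patch wins by a wide margin on saved time per byte, which is exactly why a patch-aware cache outperforms one that stores only whole templates.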

11.
Proxy caches are essential to improve the performance of the World Wide Web and to reduce user-perceived latency. Appropriate cache management strategies are crucial to achieve these goals. In our previous work, we introduced Web object-based caching policies. A Web object consists of the main HTML page and all of its constituent embedded files. Our studies have shown that these policies improve proxy cache performance substantially. In this paper, we propose a new Web object-based policy to manage the storage system of a proxy cache. We propose two techniques to improve the storage system performance. The first technique prefetches the related files belonging to a Web object from the disk into main memory; this improves performance because most of the files can then be served from main memory rather than from the proxy disk. The second technique stores the Web object members in contiguous disk blocks in order to reduce disk access time. We used trace-driven simulations to study the performance improvements one can obtain with these two techniques. Our results show that the first technique by itself provides up to a 50% reduction in hit latency, which is the delay involved in serving a hit document from the proxy. An additional 5% improvement can be obtained by incorporating the second technique.
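The first technique can be sketched as follows: when the main HTML page of a Web object is requested, its embedded files are prefetched from the slow disk store into the in-memory cache, so subsequent requests for them hit memory. The data structures and URLs below are illustrative.

```python
# Proxy disk store: url -> (body, list of embedded urls).
disk = {
    "/page.html": ("<html>...</html>", ["/logo.png", "/style.css"]),
    "/logo.png": ("PNGDATA", []),
    "/style.css": ("css", []),
}
memory = {}  # main-memory cache

def fetch(url):
    """Serve from memory if possible; on a disk read of a main page,
    prefetch the rest of its Web object into memory."""
    if url in memory:
        return memory[url], "memory"          # low-latency hit
    body, embedded = disk[url]
    memory[url] = body
    for e in embedded:                        # prefetch the Web object
        memory[e] = disk[e][0]
    return body, "disk"

fetch("/page.html")                 # disk read, prefetches logo + css
body, where = fetch("/logo.png")    # now served from memory
```

Because browsers almost always request a page's embedded files right after the page itself, this prefetch converts a burst of disk reads into memory hits.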

12.
This paper provides a transparent and speculative algorithm for content-based web page prefetching. The algorithm relies on a profile built from the user's Internet browsing habits and aims to reduce the perceived latency when the user requests a document by clicking a hyperlink. The proposed user profile relies on the frequency of occurrence of selected elements in the web pages the user has visited. These frequencies feed a mechanism that predicts the user's future actions. To anticipate the next action, the anchor text around each outbound link is used and weights are assigned to the links. Some of the linked documents are then prefetched and stored in a local cache according to the assigned weights. The proposed algorithm was tested against three different prefetching algorithms and yielded improved cache-hit rates at a moderate bandwidth overhead. Furthermore, the precision of inferring the user's preferences is evaluated through recall-precision curves. Statistical evaluation shows that the achieved recall-precision improvement is significant.
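The prediction step can be sketched as follows: score each outbound link by how well its anchor text matches the user's keyword-frequency profile, then prefetch the top-weighted links. The profile and scoring below are a simplified stand-in for the paper's element-frequency mechanism.

```python
# Keyword-frequency profile learned from pages the user visited
# (illustrative values).
profile = {"football": 12, "transfer": 7, "recipe": 1}

def link_weight(anchor_text):
    """Sum the profile frequencies of the words in the anchor text."""
    return sum(profile.get(w.lower(), 0) for w in anchor_text.split())

def choose_prefetch(links, k=2):
    """links: list of (url, anchor_text); return the top-k urls by weight."""
    ranked = sorted(links, key=lambda l: link_weight(l[1]), reverse=True)
    return [url for url, _ in ranked[:k]]

links = [
    ("/a", "football transfer news"),   # weight 19
    ("/b", "easy pasta recipe"),        # weight 1
    ("/c", "football highlights"),      # weight 12
]
to_prefetch = choose_prefetch(links)
```

Limiting prefetching to the top-k links is what keeps the bandwidth overhead moderate while still covering the user's most likely next click.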

13.
Prefetching is an active caching technique that can raise the cache hit rate, but its effectiveness depends heavily on the prefetching policy adopted. This paper proposes an object-based bounded prefetching policy for proxy servers: by adjusting the size of the prefetch space in the proxy cache, it prevents useless pages from occupying too much cache space and improves cache utilization, thereby achieving a higher hit rate. Experiments show that the hit rate of the object-based bounded prefetching policy is far higher than that of LRU, and also clearly better than object-based LRU.
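The bounded-prefetch idea can be sketched with a cache split into two regions: speculative (prefetched) pages live in their own fixed-size area, so they can never crowd out pages fetched on demand. The sizes and FIFO eviction below are illustrative choices, not the paper's exact policy.

```python
from collections import OrderedDict

class BoundedPrefetchCache:
    """Demand-fetched and prefetched pages kept in separate bounded areas."""
    def __init__(self, demand_slots, prefetch_slots):
        self.demand = OrderedDict()
        self.prefetch = OrderedDict()
        self.demand_slots = demand_slots
        self.prefetch_slots = prefetch_slots

    def _put(self, area, slots, key):
        if len(area) >= slots:
            area.popitem(last=False)   # evict the oldest entry in this area
        area[key] = True

    def add_demand(self, key):
        self._put(self.demand, self.demand_slots, key)

    def add_prefetch(self, key):
        self._put(self.prefetch, self.prefetch_slots, key)

    def hit(self, key):
        return key in self.demand or key in self.prefetch

cache = BoundedPrefetchCache(demand_slots=2, prefetch_slots=1)
cache.add_demand("p1")
cache.add_prefetch("guess1")
cache.add_prefetch("guess2")   # evicts guess1, never touches p1
```

However many speculative guesses turn out wrong, they only ever displace other guesses, which is the property that protects the hit rate on demand-fetched pages.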

14.
In this paper we propose and evaluate a new data-prefetching technique for cache coherent multiprocessors. Prefetches are issued by a functional unit called a prefetch engine which is controlled by the compiler. We let second-level cache misses generate cache miss traps and start the prefetch engine in a trap handler. The trap handler is fast (40–50 cycles) and does not normally delay the program beyond the memory latency of the miss. Once started, the prefetch engine executes on its own and causes no instruction overhead. The only instruction overhead in our approach is when a trap handler completes after data arrives. The advantages of this technique are (1) it exploits static compiler analysis to determine what to prefetch, which is hard to do in hardware, (2) it uses prefetching with very little instruction overhead, which is a limitation for traditional software-controlled prefetching, and (3) it is accurate in the sense that it generates very little useless traffic while maintaining a high prefetching coverage. We also study whether one could emulate the prefetch engine in software, which would not require any additional hardware beyond support for generating cache miss traps and ordinary prefetch instructions. In this paper we present the functionality of the prefetch engine and a compiler algorithm to control it. We evaluate our technique on six parallel scientific and engineering applications using an optimizing compiler with our algorithm and a simulated multiprocessor. We find that the prefetch engine removes up to 67% of the memory access stall time at an instruction overhead less than 0.42%. The emulated prefetch engine removes in general less stall time at a higher instruction overhead.
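The core idea can be illustrated with a toy simulation: a cache miss triggers a "prefetch engine" that fetches the next few blocks of a compiler-identified access stream on its own, with no per-access instruction overhead. The sequential-stride pattern and the depth of 4 are our illustrative assumptions, not the paper's engine.

```python
cache = set()
stats = {"misses": 0, "prefetched_hits": 0}

def prefetch_engine(block, depth=4):
    """Started by the miss trap: fetch the next `depth` blocks on its own."""
    for b in range(block + 1, block + 1 + depth):
        cache.add(b)

def access(block):
    if block in cache:
        stats["prefetched_hits"] += 1
        return
    stats["misses"] += 1          # second-level cache miss -> trap
    cache.add(block)
    prefetch_engine(block)        # trap handler starts the engine

for blk in range(10):             # sequential sweep over 10 blocks
    access(blk)
# Only blocks 0 and 5 miss; the other 8 accesses hit prefetched data.
```

In the real design the engine runs concurrently with the program, so those 8 hits cost no extra instructions at all; the simulation only counts coverage.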

15.
This paper provides a transparent and speculative algorithm for content-based web page prefetching. The algorithm relies on a profile built from the user's Internet browsing habits and aims to reduce the perceived latency when the user requests a document by clicking a hyperlink. The proposed user profile relies on the frequency of occurrence of selected elements in the web pages the user has visited. These frequencies feed a mechanism that predicts the user's future actions. To anticipate the next action, the anchor text around each outbound link is used and weights are assigned to the links. Some of the linked documents are then prefetched and stored in a local cache according to the assigned weights. The proposed algorithm was tested against three different prefetching algorithms and yielded improved cache-hit rates at a moderate bandwidth overhead. Furthermore, the precision of inferring the user's preferences is evaluated through recall-precision curves. Statistical evaluation shows that the achieved recall-precision improvement is significant.

16.
In out-of-core computation, I/O operations are slow, so file access takes a large share of the total time. Overlapping file operations with computation can greatly improve efficiency. Software data prefetching is an effective technique for hiding storage latency: data is read from disk into the cache before it is actually used, raising the cache hit rate and reducing data-access time. By using two buffers that alternately hold the current data block and the next one, memory accesses effectively always hit the cache, which greatly improves the efficiency of the out-of-core phase of a parallel Cholesky factorization program. The ratio of I/O time to CPU time is also a major factor affecting efficiency.
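The two-buffer scheme can be sketched with a reader thread: while the CPU works on block i, the other buffer is filled with block i+1, overlapping "I/O" with computation. Here `read_block` is an in-memory stand-in for a slow disk read, and the block data is illustrative.

```python
import threading

BLOCKS = [[1, 2], [3, 4], [5, 6]]   # stand-in for blocks of a disk file

def read_block(i, buf):
    buf[:] = BLOCKS[i]               # pretend this is a slow disk read

total = 0
buffers = [[], []]                   # the two alternating buffers
read_block(0, buffers[0])            # prime the first buffer
for i in range(len(BLOCKS)):
    cur = buffers[i % 2]
    t = None
    if i + 1 < len(BLOCKS):          # start reading the next block early
        t = threading.Thread(target=read_block,
                             args=(i + 1, buffers[(i + 1) % 2]))
        t.start()
    total += sum(cur)                # "compute" on the current buffer
    if t:
        t.join()                     # next buffer is ready before we loop
```

When the compute phase takes at least as long as the read, the disk latency is fully hidden: every iteration finds its data already buffered.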

17.
Web resources on the World Wide Web keep growing, to a large extent because of the services and applications it provides. Because web traffic is heavy, accessing these resources incurs user-perceived latency. Although this latency can never be eliminated, it can be reduced considerably. Web prefetching is a technique that anticipates the user's future requests and fetches the corresponding objects into the cache before an explicit request is made. Because web objects are of various types, a new algorithm is proposed that concentrates on prefetching embedded objects, including audio and video files. Further, clustering with adaptive resonance theory (ART2) is employed so that embedded objects are prefetched as clusters. For comparison, the web objects are also clustered using ART1 and other statistical techniques. The clustering results confirm the superiority of ART2, and prefetching web objects in clusters is observed to produce a high hit rate.

18.
Caching and prefetching play an important role in improving Web access performance in wireless environments. This paper studies Web caching and prefetching mechanisms for wireless LANs. Drawing on data mining and information theory respectively, it proposes prediction algorithms based on sequence mining and on deferred updates, and designs a context-aware prefetching algorithm and a benefit-driven cache replacement mechanism. These algorithms have been implemented in the Web caching system OnceEasyCache. Performance evaluation shows that integrating them effectively improves the cache hit rate and the latency savings rate.
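A minimal stand-in for the sequence-based prediction is a first-order model: learn which page most often follows each page in the access history, and prefetch that successor. The real system mines longer sequences; these simple successor counts are our simplification for illustration.

```python
from collections import Counter, defaultdict

def train(history):
    """Count, for each page, how often each other page follows it."""
    next_counts = defaultdict(Counter)
    for cur, nxt in zip(history, history[1:]):
        next_counts[cur][nxt] += 1
    return next_counts

def predict(model, page):
    """Most frequent successor of `page`, or None if unseen."""
    if page not in model:
        return None
    return model[page].most_common(1)[0][0]

history = ["home", "news", "home", "news", "home", "mail"]
model = train(history)
guess = predict(model, "home")   # "news" follows "home" most often
```

The predicted page is what the context-aware prefetcher would fetch into the cache ahead of the user's next request.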

19.
A low-overhead automatic data hoarding algorithm for mobile environments
周桓, 李京, 冯玉琳. Journal of Software, 2002, 13(10): 1962-1968
Disconnected operation is a major challenge in mobile computing. Data hoarding is the process of storing, in the local cache before disconnection, the data the user is likely to access later; its outcome significantly affects the performance of disconnected operation. This paper presents a low-overhead, general-purpose data hoarding algorithm that builds associations between data items as they are accessed and automatically selects the hoard set on the basis of those associations. Simulation results show that the algorithm effectively improves the cache hit rate during disconnected operation, and that it is especially suitable for handheld mobile devices with slow processors and small storage.
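The hoarding idea can be sketched as follows: while connected, record which items are accessed together, and before disconnecting, hoard everything associated with the user's current working set. Treating one session as the unit of co-access is our illustrative choice, not the paper's exact association rule.

```python
from collections import defaultdict

assoc = defaultdict(set)   # item -> items co-accessed with it

def observe_session(items):
    """Build symmetric associations between items accessed together."""
    for a in items:
        for b in items:
            if a != b:
                assoc[a].add(b)

def hoard_set(seeds):
    """Seeds (the current working set) plus everything associated."""
    out = set(seeds)
    for s in seeds:
        out |= assoc[s]
    return out

observe_session(["report.doc", "budget.xls"])
observe_session(["report.doc", "notes.txt"])
selected = hoard_set(["report.doc"])
```

Because associations are built as a side effect of normal access, the scheme needs no explicit user hints, which is what keeps its overhead low on small devices.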

20.
Intelligent Web acceleration based on network performance: caching and prefetching
Web traffic accounts for a large share of network traffic. When bandwidth cannot be increased, techniques are needed to use the available bandwidth sensibly and improve network performance. This paper studies intelligent Web acceleration based on network performance metrics such as RTT (round-trip time). Based on an analysis of the traffic at a Web proxy server and measurements of network RTT, an intelligent prefetch control technique and a new cache replacement method are proposed. Simulation of the new algorithm shows that the method raises the cache hit rate, and that the prefetching technique improves response speed and Web access performance without noticeably increasing network load.
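A network-cost-aware replacement rule can be sketched as follows: pages that are popular and expensive to refetch (high RTT) are kept, while cheap, unpopular pages are evicted first. The GDSF-style value formula below is our stand-in for the paper's method, chosen for illustration.

```python
def value(freq, rtt_ms, size_kb):
    """Replacement value: access frequency times refetch cost per kilobyte."""
    return freq * rtt_ms / size_kb

def evict_candidate(pages):
    """pages: {url: (freq, rtt_ms, size_kb)}; evict the lowest-value page."""
    return min(pages, key=lambda u: value(*pages[u]))

pages = {
    "/far-popular": (10, 300, 50),   # value 60.0: popular and costly to refetch
    "/near-popular": (10, 20, 50),   # value 4.0: popular but cheap to refetch
    "/far-rare": (1, 300, 50),       # value 6.0
}
victim = evict_candidate(pages)
```

Unlike plain LRU, this rule will evict a popular page if its origin server is so close that refetching costs almost nothing, freeing space for pages whose misses actually hurt.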
