61.
With the arrival of the big data era, the growing volume of backup data poses new challenges for storage space. Data deduplication is becoming increasingly popular in backup storage systems, but the large number of data accesses it requires places a heavy burden on the disk. To address the chunk-index disk bottleneck in deduplication, this paper proposes a method that combines file similarity with data-stream locality to improve disk I/O performance. The method exploits the strengths of both: similarity optimizes index lookup and can detect duplicate data that identical-data detection techniques cannot identify, while data locality preserves the sequence of the data stream, raising the cache hit rate and reducing the number of disk accesses. Storing the chunk index in a Bloom filter saves substantial query time and space overhead. The paper analyzes in depth the key parameters of the proposed method, such as chunk size and segment size, and their impact on the false-positive rate. Experimental evaluation and performance analysis provide important data for further system performance optimization.
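The Bloom-filter chunk index mentioned above can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation; the bit-array size `m_bits` and hash count `k_hashes` are assumed parameters, and the false-positive rate depends on both (roughly (1 - e^(-kn/m))^k for n inserted fingerprints).

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over chunk fingerprints (illustrative sketch)."""

    def __init__(self, m_bits=1 << 20, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, fingerprint: bytes):
        # Derive k independent bit positions by hashing the fingerprint
        # with a per-hash counter prefix.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + fingerprint).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, fingerprint: bytes):
        for p in self._positions(fingerprint):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, fingerprint: bytes) -> bool:
        # False means "definitely new chunk" -> no disk lookup needed;
        # True means "possibly seen" -> fall through to the on-disk index.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(fingerprint))
```

Queried before every on-disk index lookup, a negative answer lets the system skip the disk access entirely, which is where the time saving described in the abstract comes from.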
62.
To address the limited lifetime of flash-based SSDs and the gradual decline in reliability as capacity is consumed, and based on two key observations — (a) large numbers of duplicate data blocks exist, and (b) metadata blocks are accessed and modified far more frequently than data blocks, yet each update changes only slightly — this paper proposes Flash Saver, which couples with the EXT2/3 file system and combines deduplication with delta encoding to reduce write traffic to the SSD. Flash Saver applies deduplication to file-system data blocks and delta encoding to file-system metadata blocks. Experimental results show that Flash Saver can reduce total write traffic by up to 63%, yielding a longer lifetime, more effective flash space, and higher reliability.
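The delta encoding applied to metadata blocks exploits observation (b): each update changes only a few bytes, so recording just the changed ranges writes far less than rewriting the block. The byte-range format below is an assumption for illustration, not Flash Saver's actual on-flash format.

```python
def delta_encode(old: bytes, new: bytes):
    """Record only the byte ranges that differ between two versions of an
    equal-sized metadata block (simplified delta-encoding sketch)."""
    assert len(old) == len(new)
    deltas, i = [], 0
    while i < len(new):
        if old[i] != new[i]:
            j = i
            while j < len(new) and old[j] != new[j]:
                j += 1
            deltas.append((i, new[i:j]))  # (offset, replacement bytes)
            i = j
        else:
            i += 1
    return deltas

def delta_apply(old: bytes, deltas):
    """Reconstruct the new version by patching the old block."""
    buf = bytearray(old)
    for off, data in deltas:
        buf[off:off + len(data)] = data
    return bytes(buf)
```

For a 4 KiB metadata block where an update touches a couple of timestamps, the delta is a few dozen bytes instead of 4 KiB of write traffic.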
63.
Cloud computing enables on-demand and ubiquitous access to a centralized pool of configurable resources such as networks, applications, and services. As a result, huge numbers of enterprises and individual users outsource their data to cloud servers, and the data volume held in the cloud is growing extremely fast. How to efficiently manage this ever-increasing data is a new security challenge in cloud computing. Recently, secure deduplication techniques have attracted considerable interest in both the academic and industrial communities. Deduplication not only makes optimal use of the storage and network bandwidth resources of cloud storage providers, but also reduces the storage cost of users. Although convergent encryption has been extensively adopted for secure deduplication, it inevitably suffers from off-line brute-force dictionary attacks, since messages are usually predictable in practice. To address this weakness, the notion of DupLESS was proposed, in which the user generates the convergent key with the help of a key server. We argue that DupLESS fails when the key server is corrupted by the cloud server. In this paper, we propose a new multi-server-aided deduplication scheme based on threshold blind signatures, which can effectively resist collusion attacks between the cloud server and multiple key servers. Furthermore, we prove that our construction achieves the desired security properties.
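Convergent encryption, the baseline this abstract builds on, derives the key from the message itself, so identical plaintexts always yield identical ciphertexts and therefore deduplicate — which is also exactly why predictable messages enable off-line dictionary attacks (an attacker can encrypt every guess and compare). A toy sketch using a SHA-256 counter-mode keystream (not a production cipher):

```python
import hashlib

def convergent_key(message: bytes) -> bytes:
    # The key is a deterministic function of the plaintext.
    return hashlib.sha256(message).digest()

def _keystream(key: bytes, length: int) -> bytes:
    out, counter = bytearray(), 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(message: bytes):
    key = convergent_key(message)
    ct = bytes(m ^ k for m, k in zip(message, _keystream(key, len(message))))
    return key, ct  # identical messages -> identical (key, ct) -> dedupable

def decrypt(key: bytes, ct: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, len(ct))))
```

DupLESS-style schemes break the attacker's ability to compute this key alone by blinding the key derivation through a key server; the threshold blind-signature construction above distributes that role across multiple key servers.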
64.
Yin Qinqin, Computer Engineering and Applications, 2018, 54(10): 73-80
To address the weaknesses of the convergent encryption used for data deduplication in existing cloud storage systems — its vulnerability to brute-force and guessing attacks — this paper proposes BFHDedup, a Bloom-filter-based secure deduplication scheme for hybrid cloud storage. It improves the existing hybrid cloud storage system model: a key server (Key Server) deployed in the private cloud uses a Bloom filter to authenticate users' privilege identities, achieving fine-grained access control. The scheme also uses a two-layer encryption mechanism, adding an extra encryption layer on top of the traditional convergent encryption algorithm, and combines file-level and block-level deduplication to achieve fine-grained deduplication. In addition, BFHDedup adopts a key-encryption chain mechanism to handle the key-management problem introduced by deduplication. Security analysis and simulation results show that the scheme achieves high data confidentiality at a tolerable time cost, effectively resists brute-force and guessing attacks, improves the deduplication ratio, and reduces storage space.
65.
Adam Richard, Lai Nguyen, Peter Shipton, Kenneth B. Kent, Azden Bierbrauer, Konstantin Nasartschuk, Marcel Dombrowski, Software, 2016, 46(9): 1285-1296
Some Java programs lend themselves to being run many times, creating the same fixed objects every time. Many of these common objects are Strings. To exploit this trend, we have modified IBM's J9 Java virtual machine (JVM) to allow the same String objects to share (reuse) their internal char[] (character) arrays across JVM instances. The first instance of the Java program runs to completion and then sets up the Strings for sharing, so that subsequent instances of the same program can use the char[] arrays it created instead of recreating them. String sharing will not benefit all applications, but for those that fit the pattern, as exemplified by the Eclipse and H2 benchmarks, we achieved significant heap savings with negligible impact on performance. Copyright © 2015 John Wiley & Sons, Ltd.
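The core idea — identical character arrays stored once and reused by every String that needs them — is essentially interning at the backing-array level. A toy Python analogue of such a sharing pool (not the J9 mechanism, which works across JVM instances via shared memory):

```python
class SharedArrayPool:
    """Toy analogue of char[] sharing: identical character arrays are
    stored once, and every caller gets a reference to the same object."""

    def __init__(self):
        self._pool = {}

    def intern(self, chars: tuple) -> tuple:
        # setdefault returns the already-stored array if an equal one
        # exists, otherwise stores and returns this one.
        return self._pool.setdefault(chars, chars)
```

Two logically distinct "Strings" with equal contents end up pointing at one shared array, which is where the heap savings come from; Java's own `String.intern()` applies the same principle at the whole-String level within a single JVM.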
66.
Although deduplication can reduce data volume for backup, it pauses the running system to ensure data consistency. This problem becomes severe when the target data are Virtual Machine Images (VMIs), whose volume can scale up to several gigabytes. In this paper, we propose an online framework for VM image backup and recovery, called VMBackup, which comprises three major components: (1) Similarity Retrieval, which indexes chunk fingerprints by segment id for fast identification; (2) a one-level File-Index, which efficiently maps a file id to its content chunks in the correct order; and (3) an Adjacent Storage model, which places adjacent chunks of an image in the same disk partition to maximize chunk locality. The experimental results show that (1) images from the same OS series and the same customization can share a high percentage of duplicated content, (2) variable-length chunk partitioning is superior to fixed-length chunk partitioning for deduplication, and (3) VMBackup, in our environment, provides 8M/s backup throughput and 9.5M/s recovery throughput, only 15% and 4% less than a storage system without deduplication. Copyright © 2015 John Wiley & Sons, Ltd.
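Variable-length (content-defined) chunking, which the paper finds superior to fixed-length chunking, places chunk boundaries wherever the hash of a sliding window matches a pattern, so inserting bytes shifts only nearby boundaries instead of every boundary downstream. A simplified sketch — the window size, mask, and size bounds are illustrative assumptions, and real systems use a cheap rolling hash (e.g. Rabin fingerprints) rather than SHA-1 per position:

```python
import hashlib

def cdc_chunks(data: bytes, window=16, mask=0x3F, min_size=32, max_size=256):
    """Content-defined chunking: cut where hash(window) & mask == 0,
    with minimum and maximum chunk-size bounds."""
    chunks, start = [], 0
    i = start + min_size            # never cut before min_size
    while i < len(data):
        if i - start >= max_size:   # force a cut at max_size
            chunks.append(data[start:i])
            start, i = i, i + min_size
            continue
        w = data[max(start, i - window):i]
        h = int.from_bytes(hashlib.sha1(w).digest()[:4], "big")
        if (h & mask) == 0:         # content-defined boundary
            chunks.append(data[start:i])
            start, i = i, i + min_size
        else:
            i += 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

Because boundaries depend on local content, two images that differ by an insertion still produce many identical chunks, which is what makes this style of chunking effective for the highly similar VM images described above.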
67.
Data-popularity-based deduplication schemes suffer from dishonest detection authorities and unreliable data storage. This paper proposes a data-popularity deduplication model for decentralized storage. Against dishonest detection authorities, the model combines the tamper-resistance of blockchain with the non-repudiation of smart contracts, using a smart contract as the detection authority to perform duplicate detection and popularity detection on data, guaranteeing the authenticity of detection results. Against unreliable data storage, a file-chain storage structure is proposed that satisfies the requirements of popularity-based deduplication; by adding auxiliary information, it establishes logical relationships among the shards, distributed across different storage nodes, that realize physical/logical uploads, providing a foundation for decentralized network storage of popularity data. Meanwhile, a backup flag is added to the block information; this flag divides the storage network into two virtual storage spaces that handle the detection and storage of data and of backup data respectively, meeting users' backup needs. Security and performance analysis show that the scheme is feasible, guarantees the authenticity of detection results, and improves the reliability of data storage.
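Popularity detection in such schemes typically counts how many distinct uploads a fingerprint has received and treats the data as popular once a threshold is reached (unpopular data stays under stronger per-user protection; popular data becomes eligible for deduplication). A minimal sketch of that counter — the threshold `t` and the interface are assumptions for illustration, not the paper's smart-contract design:

```python
class PopularityDetector:
    """Sketch of a smart-contract-like popularity detector: a fingerprint
    becomes 'popular' (dedup-eligible) after t distinct owner uploads."""

    def __init__(self, t=3):
        self.t = t
        self.owners = {}  # fingerprint -> set of owner ids

    def report_upload(self, fingerprint: str, owner: str) -> bool:
        # Counting distinct owners (not raw uploads) prevents one user
        # from inflating popularity by re-uploading the same file.
        self.owners.setdefault(fingerprint, set()).add(owner)
        return len(self.owners[fingerprint]) >= self.t
```

Executing this logic inside a smart contract, as the model proposes, is what makes the count and the threshold decision tamper-evident and non-repudiable.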
68.
Existing deduplication-enabled attribute-based encryption schemes support neither auditing of cloud storage data nor revocation of expired users, and their deduplication search and user decryption are inefficient. This paper proposes an attribute-based encryption scheme supporting efficient deduplication and auditing. The scheme introduces a third-party auditor to verify the integrity of cloud storage data, uses a proxy-assisted user revocation mechanism to revoke expired users, proposes an efficient deduplication search tree to improve deduplication search efficiency, and assists user decryption through a proxy decryption mechanism. Security analysis shows that, by adopting a hybrid cloud architecture, the scheme achieves IND-CPA security in the public cloud and PRV-CDA security in the private cloud. Performance analysis shows that the scheme offers higher deduplication search efficiency and a smaller decryption workload for users.
69.
The rapid development of cloud computing and big data technology has brought people into the era of big data, and more and more enterprises and individuals outsource their data to cloud service providers. The explosive growth of data and data replicas, along with the increasing management overhead, poses a big challenge to cloud storage space. Meanwhile, serious issues such as privacy disclosure, authorized access, secure deduplication, rekeying, and permission revocation must also be taken into account. To address these problems, a role-based symmetric encryption algorithm is proposed, which establishes a mapping between roles and role keys. Moreover, a secure deduplication scheme built on role-based symmetric encryption achieves both privacy protection and authorized deduplication under a hierarchical architecture in the cloud computing environment. Furthermore, the proposed scheme uses a group key agreement protocol to achieve rekeying and permission revocation. Finally, the security analysis shows that the proposed role-based symmetric encryption algorithm is provably secure under the standard model and that the deduplication scheme meets the security requirements. The performance analysis and experimental results indicate that the proposed scheme is effective and efficient.
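The role-to-role-key mapping could, for illustration, be realized by hashing down the role hierarchy so that an ancestor role can derive any descendant's key but not vice versa. This derivation is a hypothetical sketch, not the paper's actual algorithm:

```python
import hashlib

def role_key(parent_key: bytes, role_path: list) -> bytes:
    """Hypothetical role-key derivation: hash down the hierarchy, one
    step per role name, starting from the parent (or master) key."""
    k = parent_key
    for role in role_path:
        # One-way step: child key reveals nothing about the parent key.
        k = hashlib.sha256(k + role.encode()).digest()
    return k
```

With such a mapping, a department head holding the "dept" key can locally derive every subordinate role's key for authorized deduplication checks, while subordinates cannot climb back up the hierarchy.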
70.
Cloud storage applications have quickly become the best choice for personal and enterprise storage thanks to their convenience, scalability, and other advantages; secure deduplication and integrity auditing are key issues for cloud storage. First, a convergent-key encapsulation/decapsulation algorithm based on blind signatures is constructed, which stores keys securely while still allowing them to be deduplicated. Next, a BLS signature algorithm based on convergent keys is provided, using a TTP to store public keys and to act as a proxy auditor, which enables signature and public-key deduplication and reduces client storage and computing overhead. Finally, a cloud-based secure deduplication and integrity auditing system is designed and implemented. It offers users data privacy protection, deduplication authentication, and audit authentication services, and lowers client and cloud computation overhead.
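The auditing side of such a system can be illustrated without BLS signatures by a simplified challenge-response sketch: before outsourcing a block, the owner precomputes (nonce, expected-response) pairs, and the auditor later replays a nonce and checks the server's answer. This stand-in is illustrative only and weaker than a BLS-based design (each precomputed challenge is single-use, whereas homomorphic BLS tags support unlimited challenges):

```python
import hashlib
import secrets

def prepare_challenges(block: bytes, n=4):
    """Owner-side setup: precompute n single-use audit challenges
    for a data block before handing it to the cloud server."""
    challenges = []
    for _ in range(n):
        nonce = secrets.token_bytes(16)
        expected = hashlib.sha256(nonce + block).digest()
        challenges.append((nonce, expected))
    return challenges

def respond(stored_block: bytes, nonce: bytes) -> bytes:
    """Server-side proof: can only be computed with the actual block."""
    return hashlib.sha256(nonce + stored_block).digest()
```

Because the nonce is unpredictable, the server cannot precompute or replay answers; a mismatch proves the stored block was lost or tampered with.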