Similar Literature
20 similar records found.
1.
To address the security and efficiency problems of threshold-based deduplication for cloud data, a novel method based on threshold re-encryption was proposed to defend against side-channel attacks. A lightweight threshold re-encryption mechanism offloads the secondary encryption to the cloud and lets clients generate ciphertext from key segmentation rather than ciphertext segmentation, both of which greatly reduce client-side computational overhead. The mechanism also allows clients to decrypt both once-encrypted and re-encrypted ciphertext, avoiding the overhead of redundantly encrypting the same file. Mutual integrity verification between the cloud service provider and clients is supported as well, which directly ensures the correctness of the ciphertext-plaintext correspondence on the client side. Experiments show that the method not only greatly reduces client-side computational overhead but also achieves superior storage performance on the cloud side.
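A minimal sketch of the offloading idea, assuming a toy XOR-keystream cipher in place of the paper's actual threshold re-encryption construction (all names here are illustrative): the client hands the cloud a re-encryption token, so the second encryption runs server-side without exposing the plaintext.

```python
# Toy symmetric proxy re-encryption: keystream = SHA-256 in counter mode;
# the re-encryption token is the XOR of the old and new keystreams.
import hashlib, secrets

def keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, msg: bytes) -> bytes:          # client side
    return xor(msg, keystream(key, len(msg)))

def rekey(old_key: bytes, new_key: bytes, n: int) -> bytes:
    # token for an n-byte ciphertext; this is all the cloud ever receives
    return xor(keystream(old_key, n), keystream(new_key, n))

def re_encrypt(ct: bytes, token: bytes) -> bytes:      # cloud side
    return xor(ct, token)                              # now under new_key

msg = b"chunk under deduplication"
k_old, k_new = secrets.token_bytes(32), secrets.token_bytes(32)
ct2 = re_encrypt(encrypt(k_old, msg), rekey(k_old, k_new, len(msg)))
assert encrypt(k_new, ct2) == msg   # XOR cipher: decrypt = encrypt again
```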

2.
In cloud computing and data-center environments, the failure of a single underlying physical server can severely degrade the service performance of the virtual networks above it. Existing redundancy-backup methods reduce the impact of physical-equipment failures to some extent, but they ignore the problems caused by the homogeneity of the physical servers. This paper therefore proposes a heterogeneous-backup virtual network mapping method. First, only critical virtual machines are redundantly backed up, reducing backup resource overhead; second, the physical server providing the backup virtual machine is guaranteed to run a system type different from the original server's, improving the virtual network's resilience; finally, minimizing link resource overhead is taken as the virtual network mapping objective, further reducing backup overhead. Experiments show that the method greatly improves the resilience of the virtual network while preserving mapping performance.
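A hedged sketch of the placement rule just described: back up only critical VMs, force the backup host's platform to differ from the primary's, then pick the candidate minimizing link cost. The field names (platform, free_cpu, link_cost) are assumptions for illustration.

```python
def pick_backup_host(primary, candidates, demand, link_cost):
    """Choose the cheapest host whose platform differs from the primary's."""
    feasible = [h for h in candidates
                if h["platform"] != primary["platform"]
                and h["free_cpu"] >= demand]
    if not feasible:
        raise RuntimeError("no heterogeneous host with enough capacity")
    # minimize link resource overhead, the mapping objective above
    return min(feasible, key=lambda h: link_cost[(primary["id"], h["id"])])

primary = {"id": "s1", "platform": "KVM"}
hosts = [{"id": "s2", "platform": "KVM", "free_cpu": 8},
         {"id": "s3", "platform": "Xen", "free_cpu": 4},
         {"id": "s4", "platform": "Xen", "free_cpu": 8}]
cost = {("s1", "s3"): 3, ("s1", "s4"): 1}
print(pick_backup_host(primary, hosts, demand=4, link_cost=cost)["id"])  # s4
```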

3.
The rapid development of cloud computing and big data technology has brought people into the era of big data, and more and more enterprises and individuals outsource their data to cloud service providers. The explosive growth of data and data replicas, together with increasing management overhead, poses a big challenge to cloud storage space. Meanwhile, serious issues such as privacy disclosure, authorized access, secure deduplication, rekeying, and permission revocation must also be taken into account. To address these problems, a role-based symmetric encryption algorithm was proposed, which establishes a mapping relation between roles and role keys. Moreover, a secure deduplication scheme via role-based symmetric encryption was proposed to achieve both privacy protection and authorized deduplication under a hierarchical architecture in the cloud computing environment. Furthermore, the scheme utilizes a group key agreement protocol to achieve rekeying and permission revocation. Security analysis shows that the proposed role-based symmetric encryption algorithm is provably secure under the standard model, and the deduplication scheme meets the security requirements. Performance analysis and experimental results indicate that the proposed scheme is effective and efficient.
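A minimal sketch of the role-to-role-key mapping, assuming HMAC-based key derivation as a stand-in for the paper's own algorithm; the hierarchy is modeled by deriving a child role's key from its parent's, so a superior role can re-derive subordinate keys.

```python
import hmac, hashlib

def role_key(parent_key: bytes, role_id: str) -> bytes:
    # Derive a role's key from its parent role's key and its identifier.
    return hmac.new(parent_key, role_id.encode(), hashlib.sha256).digest()

master = b"organization master secret"
k_admin = role_key(master, "admin")
k_staff = role_key(k_admin, "staff")        # admin can re-derive staff's key
assert role_key(k_admin, "staff") == k_staff
# Rekeying after revocation could be modeled by versioning the role id,
# e.g. role_key(k_admin, "staff/v2") -- an illustrative convention only.
```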

4.
Cross-user deduplication is an emerging technique for eliminating redundant uploads in cloud storage. Its deterministic response, which reveals whether data already exists, creates a side channel for attackers and puts privacy in the cloud at risk. This kind of attack, along with the further appending-chunks attack, still cannot be well resisted by current solutions and has become a major obstacle to adopting the technique. We propose a secure cross-user deduplication scheme, the request-merging-based deduplication scheme (RMDS), which is the first to consider resistance against the appending-chunks attack in a lightweight way, in addition to the side-channel attack. We use the proposed XOR-based chunk-level server-side storage structure together with a request-merging strategy to obfuscate attackers with minimal communication overhead. Experimental results show that, with security guaranteed, the proposed scheme is more efficient than the state of the art.

5.
To address the problems that current deduplication-enabled attribute-based encryption schemes support neither auditing of cloud-stored data nor revocation of expired users, and that their deduplication search and user-side decryption are inefficient, this paper proposes an attribute-based encryption scheme supporting efficient deduplication and auditing. The scheme introduces a third-party auditor to verify the integrity of cloud-stored data, revokes expired users through a proxy-assisted user-revocation mechanism, proposes an efficient deduplication search tree to speed up deduplication lookups, and assists user decryption through a proxy-decryption mechanism. Security analysis shows that, by adopting a hybrid cloud architecture, the scheme achieves IND-CPA security in the public cloud and PRV-CDA security in the private cloud. Performance analysis shows that the scheme offers higher deduplication search efficiency and a smaller decryption workload for users.

6.
Cloud storage applications have quickly become the first choice for personal and enterprise storage thanks to their convenience, scalability, and other advantages; secure deduplication and integrity auditing are key issues for cloud storage. First, a convergent-key encapsulation/decoupling algorithm based on blind signatures was designed, which stores keys securely and makes them deduplicable. In addition, a BLS signature algorithm based on the convergent key was provided, using a TTP to store public keys and act as an audit proxy, which enables signature and public-key deduplication and reduces client storage and computation overhead. Finally, a cloud-based secure deduplication and integrity-audit system was designed and implemented, offering users data privacy protection, deduplication authentication, and audit authentication services while lowering client and cloud computation overhead.
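Convergent encryption is the standard primitive behind such convergent-key designs. The sketch below shows it with a SHA-256 XOR keystream standing in for a real block cipher: the key is derived from the file itself, so identical files yield identical ciphertext and tags, which is what makes encrypted data deduplicable.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def convergent_encrypt(data: bytes):
    key = hashlib.sha256(data).digest()               # K = H(F)
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
    tag = hashlib.sha256(ct).hexdigest()              # dedup tag = H(C)
    return key, ct, tag

k1, c1, t1 = convergent_encrypt(b"same file")
k2, c2, t2 = convergent_encrypt(b"same file")
assert c1 == c2 and t1 == t2   # equal files -> equal tag -> dedup hit
```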

7.
As traditional industries digitize at an ever faster pace, the amount of data on computers grows daily, and the data-backup field faces ever greater challenges. Traditional backup requires large numbers of disk arrays as storage media, so cost control has always been a problem. To address this, taking advantage of the low cost and efficient resource elasticity and utilization of cloud storage platforms, this work studies adding a new deduplication technique to HDFS, optimizes the original backup strategy, and designs a backup system based on deduplication in cloud storage. Finally, experiments compare the improved system with a traditional backup solution in terms of the space occupied by backup files, backup time, and other parameters.
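A minimal sketch of the deduplication step such a backup system performs before upload, assuming a simple file-level fingerprint index (a dict stands in for the HDFS-backed store):

```python
import hashlib

class DedupBackup:
    def __init__(self):
        self.index = {}                      # fingerprint -> stored blob

    def backup(self, name: str, data: bytes) -> bool:
        fp = hashlib.sha256(data).hexdigest()
        if fp in self.index:
            return False                     # duplicate: keep a reference only
        self.index[fp] = data                # new data: upload once
        return True

store = DedupBackup()
assert store.backup("a.doc", b"report") is True
assert store.backup("b.doc", b"report") is False   # second copy deduplicated
```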

8.
Ciphertext-policy attribute-based searchable encryption (CP-ABSE) can achieve fine-grained access control for data sharing and retrieval, and secure deduplication can save storage space by eliminating duplicate copies. However, few schemes support both searchable encryption and secure deduplication. In this paper, a large-universe CP-ABSE scheme supporting secure block-level deduplication is proposed under a hybrid cloud mechanism. In the proposed scheme, after the ciphertext is inserted into a Bloom filter tree (BFT), the private cloud can perform fine-grained deduplication efficiently by matching tags, and the public cloud can search efficiently using a homomorphic searchable method and keyword matching. The scheme achieves privacy under chosen-distribution attacks block-level (PRV-CDA-B) secure deduplication and match-concealing (MC) searchable security. Compared with existing schemes, it has the advantage of supporting fine-grained access control, block-level deduplication, and efficient search simultaneously.
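A minimal Bloom filter sketch of the tag test a BFT node performs (a real BFT arranges many filters in a tree; the parameters here are illustrative). A positive answer must still be confirmed against the exact tag index, since false positives are possible:

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, tag: bytes):
        for i in range(self.k):              # k independent hash positions
            h = hashlib.sha256(bytes([i]) + tag).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, tag: bytes):
        for p in self._positions(tag):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, tag: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(tag))

bf = BloomFilter()
bf.add(b"tag-of-block-1")
assert bf.might_contain(b"tag-of-block-1")   # hit; confirm on exact index
```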

9.
李建江  马占宁  张凯 《电子学报》2019,47(5):1094-1100
Over the past decades, the volume of information data has grown exponentially, and storing and protecting this massive data has become a hard problem; cloud storage and deduplication have become the main technologies for solving it. Deduplication is widely used in cloud storage systems, but mainstream systems suffer from bloated index information and unpredictable chunk sizes, which waste memory and make chunking behavior hard to predict. To address these problems, a hierarchical deduplication optimization strategy based on content-defined chunking is proposed, together with a corresponding algorithm, solving the problems of oversized index tables and chunks that are too large or too small in cloud storage systems. Using CNN news pages as the test set, comparisons of deduplication ratio and deduplication time show that, relative to current mainstream strategies, the proposed strategy improves the deduplication ratio by about 3% while reducing deduplication time by about 2%.
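A sketch of content-defined chunking, the usual remedy for chunks that are too large or too small: a running hash cuts a chunk whenever it matches a mask, with floor and ceiling bounds on the chunk size. The hash and parameters below are illustrative, not the paper's.

```python
def cdc_chunks(data: bytes, mask=0x1FFF, lo=2048, hi=16384):
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF      # cheap shift-add running hash
        size = i - start + 1
        # cut on a hash match once past the floor, or force a cut at the cap
        if (size >= lo and (h & mask) == 0) or size >= hi:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])              # trailing partial chunk
    return chunks

import os
data = os.urandom(100_000)
parts = cdc_chunks(data)
assert b"".join(parts) == data and all(len(p) <= 16384 for p in parts)
```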

10.
In order to improve the efficiency of cloud storage and save communication bandwidth, a deduplication mechanism for multiple duplicates of the same data in the cloud environment is needed. However, the implementation of secure data deduplication is seriously hindered by ciphertext in the cloud. This issue quickly aroused wide attention in academia and industry and became a research hotspot. From a security standpoint, the primary cause and the main challenges of secure data deduplication in the cloud environment are first analyzed, and then the deduplication system model and its security model are described. Furthermore, focusing on the realization mechanisms of secure data deduplication, thorough analyses and reviews of related research from recent years are carried out, covering content-based encryption, proof of ownership, and privacy protection for secure deduplication, and the advantages and common problems of the various key technologies and methods are summed up. Finally, future research directions and development trends for secure data deduplication in the cloud are given.

11.
For the problems of key exposure, encrypted data duplication, and integrity auditing in cloud data storage, a public auditing scheme was proposed that supports key update and encrypted-data deduplication. Utilizing Bloom filters, the proposed scheme achieves client-side deduplication and guarantees that a key exposure in one time period does not affect the user's private keys in other time periods. The scheme is the first to resolve the conflict between key-exposure resilience and encrypted-data deduplication in public auditing schemes. Security analysis indicates that the scheme achieves strong key-exposure resilience, confidentiality, detectability, and unforgeability of authentication tags and tokens under the computational Diffie-Hellman hardness assumption in the random oracle model.

12.
As deduplication operations accumulate, metadata such as the manifest files that store the fingerprint index keeps growing, incurring storage overhead that cannot be ignored. Compressing the metadata produced during deduplication without hurting the deduplication ratio, and thereby shrinking the lookup index, is therefore an important factor in further improving deduplication efficiency and storage utilization. Observing that the lookup metadata contains a large amount of redundancy, a condensed-nearest-neighbour-based metadata de-redundancy algorithm, Dedup2, is proposed. The algorithm first partitions the lookup metadata into classes with a clustering algorithm, then uses the condensed nearest neighbour algorithm to eliminate highly similar entries and obtain a lookup subset, on which data objects are deduplicated using file similarity. Experimental results show that Dedup2 can shrink the lookup index by more than 50% while maintaining a nearly identical deduplication ratio.
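A simplified stand-in for the condensation step (the paper uses clustering plus condensed nearest neighbour; this greedy pass merely drops index entries too similar to an already-kept representative, with Jaccard similarity over chunk-feature sets as an assumed metric):

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def condense(entries, threshold=0.5):
    """Keep an entry only if it is dissimilar to every kept representative."""
    kept = []
    for feats in entries:
        if all(jaccard(feats, k) < threshold for k in kept):
            kept.append(feats)
    return kept

index = [{1, 2, 3, 4}, {1, 2, 3, 5}, {9, 10, 11, 12}]
print(len(condense(index)))   # 2: the near-duplicate second entry is dropped
```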

13.
Live migration of virtual machines across wide-area networks is a key enabler for multi-data-center cloud computing environments. It currently faces technical challenges imposed by low bandwidth and the lack of shared storage, such as the security and consistency of migrated image data. This paper therefore proposes a HashGraph-based method for live VM migration across data centers, applying the idea of decentralization to achieve reliable and efficient distributed sharing of image information between data centers. Through the Merkle DAG storage structure in HashGraph, the method remedies the shortcomings of deduplication when migrating VM images across data centers. Compared with existing methods, it shortens the total migration time.
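A minimal sketch of the Merkle DAG property being relied on: an object's address is the hash of its payload plus its children's addresses, so identical image blocks shared by different manifests are stored once, and any tampering changes the root address. The store and helpers are illustrative.

```python
import hashlib

store = {}                                   # content-addressed object store

def node_hash(payload: bytes, children=()):
    h = hashlib.sha256(payload)
    for c in children:                       # child order is significant
        h.update(c)
    return h.digest()

def put(payload: bytes, children=()):
    addr = node_hash(payload, children)
    store[addr] = (payload, tuple(children)) # re-putting same content: no-op
    return addr

blk = put(b"base image block")               # shared by both image versions
v1 = put(b"image v1 manifest", [blk])
v2 = put(b"image v2 manifest", [blk])
assert len(store) == 3                       # the shared block is stored once
```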

14.
付安民  宋建业  苏铓  李帅 《电子学报》2017,45(12):2863-2872
In cloud storage, client-side deduplication detects duplicate files locally, effectively saving storage space and network bandwidth. However, client-side deduplication still faces many security challenges. First, since a file's hash value serves as the evidence in duplicate detection, an attacker may well obtain an entire file from its hash value alone. Second, convergent encryption is widely used in deduplication schemes to protect data privacy, but because the data itself is predictable, convergent encryption still inevitably suffers from brute-force dictionary attacks. To solve these problems, this paper is the first to construct a secure key-generation protocol from blind signatures: by introducing a key server, the convergent key is encrypted a second time, effectively preventing brute-force dictionary attacks. A proof-of-ownership method based on block-key signatures is further proposed, which effectively prevents an attacker from obtaining a file through a single hash value and achieves both file-level and block-level deduplication over ciphertext. Security analysis shows the scheme is provably secure in the random oracle model and satisfies additional security properties such as convergent-key security, tag consistency, and resistance to brute-force dictionary attacks. Moreover, experimental results show that, compared with existing schemes, the scheme has relatively low computation overhead for file upload and deduplication.
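A simplified stand-in for the key-server idea: the sketch below omits the blind-signature step (so, unlike the paper's protocol, the server sees the file hash) and only shows why mixing in a server-held secret defeats offline dictionary attacks while keeping keys convergent.

```python
import hmac, hashlib

SERVER_SECRET = b"key-server long-term secret"   # never leaves the server

def key_server_derive(file_digest: bytes) -> bytes:
    # Server-aided key: depends on both the file and the server secret.
    return hmac.new(SERVER_SECRET, file_digest, hashlib.sha256).digest()

def client_key(data: bytes) -> bytes:
    return key_server_derive(hashlib.sha256(data).digest())

# Identical files still map to identical keys, so deduplication works...
assert client_key(b"common file") == client_key(b"common file")
# ...but an attacker guessing candidate files offline gets nowhere
# without querying the key server, which can rate-limit requests.
```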

15.
To meet cloud providers' need to keep their own disaster-recovery costs as low as possible while guaranteeing data reliability, a "rich cloud" based data disaster-recovery strategy, RCDDRS, is proposed. The strategy performs dynamic multi-objective scheduling: when a cloud provider's own storage resources are limited, it judiciously selects other providers' resources to store data backups, so that the disaster-recovery cost is as low as possible and the recovery time after a disaster is as short as possible. Simulation results demonstrate the feasibility and effectiveness of the strategy.

16.

Large amounts of data are generated every second around the world, and technology evolves daily to handle and store such enormous data efficiently. Even with leaps in technology, providing storage space for the gigantic volume of data generated globally each second remains a conundrum. One of the main problems storage servers and data centers face is data redundancy: the same data, or slightly modified data, is stored in the same server repeatedly, so multiple copies occupy extra space. Data deduplication overcomes this issue, but existing deduplication techniques fall short in identifying data efficiently. The proposed method uses a mixed-mode analytical architecture with three levels of mapping; each level deals with different aspects of the data and of the operations carried out to admit only unique data into the cloud server. The focus is on effectively ruling out duplicated data and optimizing data storage in the cloud server.

17.
Deduplication is widely used in cloud storage services to save bandwidth and storage resources; however, client-side deduplication remains vulnerable to external attackers gaining access to a user's private data. Xu-CDE, the first deduplication solution for encrypted data across multiple clients, protects data privacy from both external attackers and the honest-but-curious server, and has favorable theoretical significance and representativeness. However, in Xu-CDE the user's ownership authentication credentials lack freshness protection and cannot resist replay attacks. As an improvement on this flaw, the protocol MRN-CDE (MLE-based and random-number-modified client-side deduplication of encrypted data in cloud storage) is proposed, adding a random number to ensure the freshness of the authentication credentials and using the MLE-KP algorithm to extract the encryption key from the original file instead of using the file itself. As a consequence, the new protocol improves security while significantly reducing computation. Security analysis and practical tests show that, compared with Xu-CDE, MRN-CDE provides stronger ownership security and better time efficiency, and performs particularly well on large files in the cloud.
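A minimal sketch of the freshness fix, assuming an HMAC challenge-response as a stand-in for the paper's credential construction: the server's random nonce binds each ownership proof to one request, so a captured proof cannot be replayed.

```python
import hmac, hashlib, secrets

def file_key(data: bytes) -> bytes:          # MLE-style key from the file
    return hashlib.sha256(data).digest()

def server_challenge() -> bytes:
    return secrets.token_bytes(16)           # fresh nonce per request

def client_prove(data: bytes, nonce: bytes) -> bytes:
    return hmac.new(file_key(data), nonce, hashlib.sha256).digest()

def server_verify(stored_key: bytes, nonce: bytes, proof: bytes) -> bool:
    expect = hmac.new(stored_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expect, proof)

data = b"the outsourced file"
nonce = server_challenge()
proof = client_prove(data, nonce)
assert server_verify(file_key(data), nonce, proof)
assert not server_verify(file_key(data), server_challenge(), proof)  # replay fails
```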

18.
With the continuous development of information technology, massive data volumes pose great challenges to both storage and transmission. Data compression can effectively reduce the amount of data, easing its processing and transmission. Lossless compression exploits redundancy in the data and restores it exactly, without any distortion, on decompression. Building on a study of the fast-decompression principle of the LZO algorithm, a new compression algorithm is designed. By reducing the number of compressed blocks in the compressed data, the algorithm lowers the execution overhead of the decompressor. Test results show that the new algorithm achieves faster decompression than LZO.
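For intuition, a minimal LZ77-style decoder (the token format is invented for illustration and is unrelated to LZO's actual encoding): output is rebuilt from literals and (distance, length) back-references, which is why this family of algorithms decompresses so quickly.

```python
def decode(tokens):
    out = bytearray()
    for t in tokens:
        if isinstance(t, int):               # literal byte
            out.append(t)
        else:                                # (distance, length) match
            dist, length = t
            for _ in range(length):          # byte-wise copy allows overlap
                out.append(out[-dist])
    return bytes(out)

# "abcabcabc": three literals, then one overlapping match of length 6
assert decode([97, 98, 99, (3, 6)]) == b"abcabcabc"
```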

19.
The backup requirements of data centres are tremendous, as the data created by humans is massive and growing exponentially. Single-node deduplication cannot meet the increasing backup requirements of data centres; a feasible way is a deduplication cluster, which scales by adding storage nodes. The data routing strategy is the key to a deduplication cluster. DRSS (data routing strategy using semantics) considerably improves the storage utilization of the MCS (minimum chunk signature) data routing strategy. However, for large deduplication clusters, the load balance of DRSS is worse than that of MCS. To improve it, we propose a load-balancing strategy for DRSS, namely DRSSLB. When a node is overloaded, DRSSLB iteratively migrates the node's current smallest container to the smallest node in the deduplication cluster until the node is no longer overloaded. A container is the minimum unit of data migration; similar files sharing the same features or file names are stored in the same container, which ensures that similar data groups remain on the same node after rebalancing. We evaluate DRSSLB using a real-world dataset. Experimental results show that, for various cluster sizes, the data skew of DRSSLB stays under the predefined value while its storage utilization hardly increases compared with DRSS, at a low penalty (a data migration rate of only 6.5% for 64 nodes).
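A direct sketch of the rebalancing rule as described: while a node exceeds its limit, migrate its smallest container to the least-loaded node. Containers are modeled as name-to-size maps; the limit value is illustrative.

```python
def rebalance(nodes, limit):
    """nodes: {node_id: {container_id: size}}; limit: max load per node."""
    def load(n):
        return sum(nodes[n].values())
    for n in list(nodes):
        while load(n) > limit and nodes[n]:
            smallest = min(nodes[n], key=nodes[n].get)   # smallest container
            target = min(nodes, key=load)                # least-loaded node
            if target == n:
                break                                    # nowhere better to go
            nodes[target][smallest] = nodes[n].pop(smallest)

cluster = {"n1": {"c1": 50, "c2": 10, "c3": 45}, "n2": {"c4": 20}}
rebalance(cluster, limit=80)
print({k: sum(v.values()) for k, v in cluster.items()})  # {'n1': 50, 'n2': 75}
```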

20.
SLAM (Simultaneous Localization and Mapping) is a key technology for the intelligent perception of mobile robots. However, most existing SLAM methods assume a static environment; when frequently moving obstacles are present, SLAM mapping suffers motion distortion, and the robot can no longer localize and navigate accurately. Meanwhile, the 3D point clouds obtained by lidar and other 3D scanning devices contain large numbers of redundant points; this redundancy not only wastes storage space but also degrades the real-time performance of point-cloud processing algorithms. To address these problems, this paper proposes a SLAM motion-distortion removal method and a curvature-based point-cloud classification and simplification framework. It corrects SLAM motion distortion through laser interpolation and then classifies and simplifies the corrected point cloud. The approach improves SLAM mapping accuracy while effectively removing redundant points in feature-poor regions of the 3D point cloud, greatly improving computational efficiency.
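A hedged sketch of curvature-based simplification, assuming PCA surface variation over each point's k nearest neighbours as the curvature proxy (the paper's own criterion may differ): flat, feature-poor regions score near zero and are pruned. Brute-force neighbour search is used for brevity.

```python
import numpy as np

def surface_variation(points: np.ndarray, k: int = 8) -> np.ndarray:
    # Pairwise squared distances (O(n^2); fine for a sketch).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    curv = np.empty(len(points))
    for i, row in enumerate(d2):
        nb = points[np.argsort(row)[:k + 1]]          # k neighbours + self
        lam = np.sort(np.linalg.eigvalsh(np.cov(nb.T)))
        curv[i] = lam[0] / lam.sum()                  # 0 = perfectly flat
    return curv

rng = np.random.default_rng(0)
flat = np.c_[rng.uniform(size=(200, 2)), np.zeros(200)]   # a flat patch
kept = flat[surface_variation(flat) > 0.01]
print(len(kept), "of", len(flat))   # the flat region is heavily pruned
```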
