Similar Articles
20 similar articles found.
1.
To address the security risks faced by outsourced data in cloud storage environments and the shortcomings of existing cloud data integrity verification schemes, a multi-replica data integrity verification scheme supporting dynamic operations is proposed. The scheme targets multi-replica application scenarios and, building on existing cloud data integrity verification schemes, achieves verification of multiple file replicas at a small additional cost. By introducing an authenticated data structure, the rank-based Merkle hash tree, it supports verifiable dynamic updates of files, and by associating the replicas with one another it enables synchronized updates across all replicas. Security analysis and experiments show that the scheme is secure and effective: it provides secure data storage and updating and effectively preserves the privacy of the multiple data replicas.
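To make the authenticated structure concrete, the following is a minimal Python sketch of a Merkle hash tree over file blocks in which every node also stores its rank (the number of blocks beneath it), as the rank-based variant uses for locating blocks during dynamic updates. It is an illustrative simplification under our own naming, not the scheme's actual construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Node:
    def __init__(self, digest, rank, left=None, right=None):
        self.digest = digest    # hash stored at this node
        self.rank = rank        # number of leaf blocks under this node
        self.left, self.right = left, right

def build_tree(blocks):
    """Build a Merkle tree over file blocks; each node also stores its rank."""
    nodes = [Node(h(b), 1) for b in blocks]
    while len(nodes) > 1:
        nxt = []
        for i in range(0, len(nodes), 2):
            if i + 1 < len(nodes):
                l, r = nodes[i], nodes[i + 1]
                nxt.append(Node(h(l.digest + r.digest), l.rank + r.rank, l, r))
            else:                      # odd node is carried up unchanged
                nxt.append(nodes[i])
        nodes = nxt
    return nodes[0]

def auth_path(root, index):
    """Collect sibling digests from the leaf at `index` up to the root, using ranks to navigate."""
    path, node = [], root
    while node.left is not None:
        if index < node.left.rank:            # target leaf lies in the left subtree
            path.append(('R', node.right.digest))
            node = node.left
        else:
            path.append(('L', node.left.digest))
            index -= node.left.rank
            node = node.right
    return list(reversed(path))

def verify(block, path, root_digest):
    """Recompute the root from one block and its authentication path."""
    digest = h(block)
    for side, sibling in path:
        digest = h(sibling + digest) if side == 'L' else h(digest + sibling)
    return digest == root_digest

blocks = [b'block-%d' % i for i in range(8)]
root = build_tree(blocks)
proof = auth_path(root, 5)
assert verify(blocks[5], proof, root.digest)
```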

2.
黄冬梅  杜艳玲  贺琪  随宏运  李瑶 《计算机科学》2018,45(6):72-75, 104
The integrity and reliability of data are key to efficient access, and in cloud storage environments in particular, the data replica strategy is central to system performance and data availability. From the perspective of replica layout, a Data Replica Layout Strategy based on Multiple Attribute Optimization (MAO-DRLS) is proposed. Based on the access heat of the data and the key attributes of the storage nodes, the strategy sets a dynamic replica count for each data item and selects suitable nodes on which to place the replicas. Experiments show that MAO-DRLS effectively improves replica utilization and shortens the system response time.
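The abstract does not give the MAO-DRLS formulas, so the sketch below only illustrates the general idea: derive a per-file replica count from access heat and rank storage nodes by a weighted combination of their key attributes. All attribute names, weights and thresholds are invented for the example.

```python
import math

def replica_count(access_heat, base=2, max_replicas=8):
    """Toy rule: hotter data gets more replicas (log-scaled), capped at a maximum."""
    return min(max_replicas, base + int(math.log2(1 + access_heat)))

def node_score(node, weights):
    """Weighted sum of normalized node attributes (attribute names are illustrative)."""
    return (weights['bandwidth'] * node['bandwidth'] +
            weights['free_space'] * node['free_space'] -
            weights['load'] * node['load'])

def place_replicas(file_heat, nodes, weights):
    k = replica_count(file_heat)
    ranked = sorted(nodes, key=lambda n: node_score(n, weights), reverse=True)
    return [n['id'] for n in ranked[:k]]

nodes = [
    {'id': 'n1', 'bandwidth': 0.9, 'free_space': 0.4, 'load': 0.7},
    {'id': 'n2', 'bandwidth': 0.6, 'free_space': 0.8, 'load': 0.2},
    {'id': 'n3', 'bandwidth': 0.3, 'free_space': 0.9, 'load': 0.1},
]
weights = {'bandwidth': 0.5, 'free_space': 0.3, 'load': 0.2}
print(place_replicas(file_heat=30, nodes=nodes, weights=weights))
```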

3.
A Dynamic Multi-Replica Cost Model Based on P2P Computing
Maintaining replicas on multiple nodes is an effective way to improve availability in P2P and grid computing environments. Unreliable peers and networks, together with diverse user access patterns, make it difficult to determine how many replicas are needed to satisfy users' high-availability requirements. This paper proposes a minimum-cost model to predict and dynamically control the number of replicas. To hide the unreliability of peers in the system, the number of replicas is predicted through a redundancy mechanism while multiple replicas are taken into account. Simulation results show that the system achieves better availability with a low replication overhead.
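A common way to reason about how many replicas satisfy a high-availability requirement under unreliable peers is the independent-failure model; the snippet below computes the smallest replica count meeting an availability target. This is only the textbook calculation, not the paper's minimum-cost model.

```python
def min_replicas(peer_availability: float, target: float, max_n: int = 50) -> int:
    """Smallest n with 1 - (1 - p)^n >= target, assuming independent peer failures."""
    for n in range(1, max_n + 1):
        if 1 - (1 - peer_availability) ** n >= target:
            return n
    raise ValueError("target not reachable within max_n replicas")

# e.g. peers are each online 70% of the time and we want 99.9% availability
print(min_replicas(0.7, 0.999))   # -> 6
```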

4.
In a cloud storage environment, cloud servers are not fully trusted, so how users can verify the integrity of their data on the cloud at low cost has become a growing concern. Many protection methods have been proposed, but when authenticating multiple files they must authenticate the files one by one, so their computation and communication overhead remain high when the number of files is large. To address this problem, a hierarchical remote data possession checking method is proposed. Combined with remote data possession checking, it provides efficient and secure remote data integrity protection and supports dynamic data operations. A security analysis and experimental evaluation show that the proposed method is secure and reliable, and that at a low miss-detection rate it achieves a 45%-80% performance improvement over remote data possession checking.

5.
To check whether the cloud service provider (CSP) stores all of a user's data replicas completely, as agreed, this paper analyzes and points out security flaws in a homomorphic-hash-based provable data possession scheme, and then improves and extends it into a multi-replica possession checking scheme. To support multi-replica checking, each replica number is concatenated with the file and encrypted under the same key to generate the replica files, which both prevents collusion attacks among the CSP's servers and simplifies key management for the user and the file's authorized accessors. To improve checking efficiency, homomorphic hashing is used to generate verification tags for data blocks, enabling batch checking of all replicas. To guarantee security, the file identifier and block position information are embedded into the block tags, effectively preventing replace and replay attacks by the CSP. The security proof and performance analysis show that the scheme is correct and sound, has low computation, storage and communication overhead, and supports public verification, providing a feasible approach to data integrity checking in cloud storage.
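The batch check rests on the homomorphic property of the block tags: the tag of a linear combination of blocks equals the matching product of the individual tags. The sketch below demonstrates this with the classic exponential hash H(m) = g^m mod p; it is only an illustration of the homomorphism and omits the scheme's binding of file identifiers and block positions into the tags. The parameters are demo values.

```python
import random

# toy parameters: a real deployment would use a large, properly generated group
P = 2**127 - 1          # a Mersenne prime, used here only for demonstration
G = 3

def tag(block: int) -> int:
    """Homomorphic hash of a block (blocks are treated as integers)."""
    return pow(G, block, P)

blocks = [1234567, 7654321, 1111111]
tags = [tag(b) for b in blocks]

# verifier picks random coefficients and asks for the combined block
coeffs = [random.randrange(1, 2**16) for _ in blocks]
combined = sum(c * b for c, b in zip(coeffs, blocks))      # prover computes this

# one check covers all blocks: tag(sum c_i*b_i) == prod tag(b_i)^c_i
lhs = tag(combined)
rhs = 1
for c, t in zip(coeffs, tags):
    rhs = (rhs * pow(t, c, P)) % P
assert lhs == rhs
```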

6.
The expiration-based scheme is widely used to manage the consistency of cached and replicated contents such as Web objects. In this approach, each replica is associated with an expiration time beyond which the replica has to be validated. While the expiration-based scheme has been investigated in the context of a single replica, not much work has been done on its behaviors with respect to multiple replicas. To allow for efficient consistency management, it is desirable to organize the replicas into a distribution tree where a lower level replica seeks validation with a higher level replica when its lifetime expires. This paper investigates the construction of a distribution tree for a given set of replicas with the objective of minimizing the total communication cost of consistency management. This is formulated as an optimization problem and is proven to be NP-complete. The optimal distribution tree is identified in some special cases and several heuristic algorithms are proposed for the general problem. The performance of the heuristic algorithms is experimentally evaluated against two classical graph-theoretic algorithms of tree construction: the shortest-paths tree and the minimum spanning tree.
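For context, one of the two classical baselines mentioned above, the minimum spanning tree over replica sites, can be built with Prim's algorithm on a pairwise communication-cost matrix, as in the sketch below; the paper's own heuristics are not reproduced here.

```python
import heapq

def minimum_spanning_tree(cost):
    """Prim's algorithm; cost[i][j] is the communication cost between replicas i and j.
    Returns the tree as a list of (parent, child) edges grown from replica 0."""
    n = len(cost)
    in_tree = [False] * n
    in_tree[0] = True
    edges = []
    heap = [(cost[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while heap and len(edges) < n - 1:
        c, i, j = heapq.heappop(heap)
        if in_tree[j]:
            continue
        in_tree[j] = True
        edges.append((i, j))
        for k in range(n):
            if not in_tree[k]:
                heapq.heappush(heap, (cost[j][k], j, k))
    return edges

cost = [[0, 4, 1, 9],
        [4, 0, 2, 6],
        [1, 2, 0, 3],
        [9, 6, 3, 0]]
print(minimum_spanning_tree(cost))   # [(0, 2), (2, 1), (2, 3)]
```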

7.
Secure Data Objects Replication in Data Grid
Secret sharing and erasure coding-based approaches have been used in distributed storage systems to ensure the confidentiality, integrity, and availability of critical information. To achieve performance goals in data accesses, these data fragmentation approaches can be combined with dynamic replication. In this paper, we consider data partitioning (both secret sharing and erasure coding) and dynamic replication in data grids, in which security and data access performance are critical issues. More specifically, we investigate the problem of optimal allocation of sensitive data objects that are partitioned by using a secret sharing scheme or an erasure coding scheme and/or replicated. The grid topology we consider consists of two layers. In the upper layer, multiple clusters form a network topology that can be represented by a general graph. The topology within each cluster is represented by a tree graph. We decompose the share replica allocation problem into two subproblems: the Optimal Intercluster Resident Set Problem (OIRSP), which determines which clusters need share replicas, and the Optimal Intracluster Share Allocation Problem (OISAP), which determines the number of share replicas needed in a cluster and their placements. We develop two heuristic algorithms for the two subproblems. Experimental studies show that the heuristic algorithms achieve good performance in reducing communication cost and are close to optimal solutions.

8.
As an essential technology of cloud computing, cloud storage satisfies users' demands with scalable, ubiquitously accessible and low-maintenance-cost services. However, moving data to cloud servers brings significant security challenges due to the loss of physical data possession. In order to verify data integrity, many verifiable data possession schemes have been proposed in the last several years. Very recently, Tang and Zhang proposed a new publicly verifiable data possession (PVDP) scheme for remote storage. They claimed that their scheme was suitable for checking storage correctness and secure against various types of attacks. In this paper, we analyze the security of Tang and Zhang's PVDP scheme and prove that it is vulnerable to a data recovery attack. We also demonstrate with a concrete instance that the PVDP scheme works incorrectly. Our analysis shows that their scheme is not suitable for practical applications. Our work can help cryptographers and engineers design and implement more secure and efficient public auditing schemes for cloud storage data.

9.
Data replica management is an important part of cloud computing system management. For the massive data processing performed by cloud computing systems, and given that known data placement and resource scheduling algorithms insufficiently consider the dynamics and reliability of replicas, a dynamic replica placement mechanism is proposed. Based on a zone structure, the mechanism considers the number and placement of replicas during data processing, as well as the memory, bandwidth and other system resources consumed by creating them. First, according to the replica information in the cloud storage, data that are accessed frequently and have a long average access response time are replicated, and a method for computing the number of replicas is given. Then, to narrow the range of candidate nodes for replica placement, a dynamic replica placement algorithm, DRA, is proposed: nodes within a certain range are screened according to the proposed domain partitioning, and the data replicas are placed on the selected nodes. Experimental results show that the proposed dynamic placement mechanism reduces the storage space wasted by rarely accessed replicas, reduces the cross-node transmission delay incurred by frequently accessed replicas, and effectively improves data access efficiency, load balancing, and the reliability and availability of the cloud storage system.

10.
An Extensible Dynamic Provable Data Possession Scheme
A dynamic provable data possession scheme that protects data privacy is proposed and its security is proven. Performance analysis shows that the scheme has low storage, computation and communication overhead. Through extensions, support for public verifiability and multi-replica checking is achieved; the extended scheme requires no trusted third party for public verification, and its multi-replica checking conveniently supports batch checking of all replicas.

11.
Service clouds built on cloud infrastructures and service-oriented architecture provide users with a novel pattern of composing basic services to achieve complicated tasks. However, in a multiple-clouds environment, outsourcing data and applications poses a great challenge to information flow security for composite services, since sensitive data may be leaked to unauthorized attackers during service composition. Although model checking has been considered a promising approach to enforcing information flow security precisely, its high modeling complexity and heavy verification cost place great burdens on the process of service composition. In this paper, we propose a distributed approach to composing services securely with information flow control. In our approach, each service component is first verified through model checking, and then a compositional verification procedure is executed to ensure information flow security throughout the composition of these services. The experimental results indicate that our approach can reduce the cost of verification compared with the global verification approach.

12.
In a distributed database, maintaining large table replicas with frequent asynchronous insertions is a challenging problem that requires carefully managing a tradeoff between consistency and availability. With that motivation in mind, we propose efficient algorithms to repair and measure replica consistency. Specifically, we adapt, extend and optimize distributed set reconciliation algorithms to efficiently compute the symmetric difference between replicated tables in a distributed relational database. Our novel algorithms enable fast synchronization of replicas being updated with small sets of new records, measuring obsolescence of replicas having many insertions and deciding when to update a replica, as each table replica is being continuously updated in an asynchronous manner. We first present an algorithm to repair and measure distributed consistency on a large table continuously updated with new records at several sites when the number of insertions is small. We then present a complementary algorithm that enables fast synchronization of a summarization table based on foreign keys when the number of insertions is large, but happening on a few foreign key values. From a distributed systems perspective, in the first algorithm the large table with data is reconciled, whereas in the second case, its summarization table is reconciled. Both distributed database algorithms have linear communication complexity and cubic time complexity in the size of the symmetric difference between the respective table replicas they work on. That is, they are effective when the network speed is smaller than CPU speed at each site. A performance experimental evaluation with synthetic and real databases shows our algorithms are faster than a previous state-of-the-art algorithm as well as more efficient than transferring complete tables, assuming large replicated tables and sporadic asynchronous insertions.
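The adapted reconciliation algorithms themselves cannot be reconstructed from the abstract; the sketch below only shows the baseline notion of the symmetric difference between two table replicas, computed here naively from per-row digests, which is the quantity the paper's algorithms obtain with far less communication.

```python
import hashlib

def row_digest(row: tuple) -> str:
    """Digest of one record (primary key plus payload)."""
    return hashlib.sha256(repr(row).encode()).hexdigest()

def symmetric_difference(table_a, table_b):
    """Rows present in exactly one replica, found by comparing digest sets.
    A real reconciliation protocol would avoid shipping the full digest sets."""
    digests_a = {row_digest(r): r for r in table_a}
    digests_b = {row_digest(r): r for r in table_b}
    only_a = [digests_a[d] for d in digests_a.keys() - digests_b.keys()]
    only_b = [digests_b[d] for d in digests_b.keys() - digests_a.keys()]
    return only_a, only_b

replica_1 = [(1, 'alice'), (2, 'bob'), (3, 'carol')]
replica_2 = [(1, 'alice'), (3, 'carol'), (4, 'dave')]
missing_in_2, missing_in_1 = symmetric_difference(replica_1, replica_2)
print(missing_in_2, missing_in_1)   # [(2, 'bob')] [(4, 'dave')]
```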

13.
Gaston is a peer-to-peer large-scale file system designed to provide a fault-tolerant and highly available file service for a virtually unlimited number of users. Data management in Gaston disseminates and stores replicas of files on multiple machines to achieve the requested level of data availability and uses a dynamic tree-topology structure to connect replication schema members. We present generic algorithms for replication schema creation and maintenance according to file user requirements and autonomous constraints that are set on individual nodes. We also show specific data object structure as well as mechanisms for secure and efficient update propagation among replicas with data consistency control. Finally, we introduce a scalable and efficient technique improving fault-tolerance of the tree-topology structure connecting replicas.  相似文献   

14.
沈文婷  于佳  杨光洋  程相国  郝蓉 《软件学报》2016,27(6):1451-1462
Integrity checking of shared data in cloud storage verifies the integrity of data shared by a group in the cloud and is one of the most common forms of cloud storage integrity checking. In such checking, the private key a user uses to generate data signatures may become unusable because of damage to or failure of the storage medium. However, none of the existing integrity checking schemes for shared cloud data considers this practical problem. This paper is the first to explore how to handle unusable private keys in integrity checking of shared cloud data, and it proposes the first such scheme with private key recoverability: when a group user's private key becomes unusable, it can be recovered with the help of t or more users in the group. A random masking technique is designed to protect the private keys of the participating members, and the user can also verify the correctness of the recovered key. Finally, the security analysis and experimental results show that the proposed scheme is secure and efficient.
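The t-of-n recovery capability described above is a threshold-sharing property; the sketch below shows plain Shamir secret sharing over a prime field as an illustration of it. The random-masking protection and the correctness verification of the recovered key are omitted, and the field size is a demo value.

```python
import random

PRIME = 2**127 - 1          # demo field; a real scheme would match the key's group order

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares so that any t of them recover it (Shamir)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 using any t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789012345678901234567890
shares = make_shares(key, t=3, n=5)
assert recover(shares[:3]) == key       # any 3 of the 5 group members suffice
assert recover(shares[2:]) == key
```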

15.
王惠清  周雷 《计算机科学》2016,43(Z6):370-373, 409
In cloud storage services, users store their data on untrusted cloud storage servers. To check whether the cloud service provider (CSP) stores all of a user's data replicas completely, as agreed, a Paillier-encryption-based multi-replica provable data possession scheme that supports dynamic operations on the replicas, DMR-PDP, is proposed. To enable multi-replica checking, file blocks are stored on the cloud servers in the form of replica files: each replica number is concatenated with the file and the Paillier cryptosystem is used to generate the replica files, which prevents collusion attacks among the CSP's servers. BLS signatures are used for batch verification of all replicas. The file identifier and block position information are embedded into the data block tags to guarantee security and to support dynamic update operations on files. Security analysis and simulation results show that the scheme outperforms other schemes in the literature in terms of security, communication and computation overhead, greatly improving the efficiency of file storage and verification and reducing computation costs.
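To illustrate how Paillier encryption yields distinguishable yet recoverable replicas, the sketch below implements textbook Paillier with tiny demo primes and a toy encoding of the replica index; key sizes, the block tags and the BLS-signature layer of DMR-PDP are omitted, and the encoding is ours, not the paper's.

```python
import math, random

# tiny demo primes; real Paillier keys are thousands of bits
p, q = 2357, 2551
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):                       # the L function from Paillier decryption
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

block = 424242
# toy encoding of "replica index || block"; the real scheme's encoding differs
replicas = {i: encrypt((i << 20) | block) for i in range(1, 4)}
assert len(set(replicas.values())) == 3                          # ciphertexts differ per replica
assert all(decrypt(c) & ((1 << 20) - 1) == block for c in replicas.values())
```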

16.
周婧  王意洁  李思昆 《软件学报》2007,18(6):1456-1467
To address the resource management problem raised by large numbers of data replicas, a finite-coding-based clustered management method for multiple replicas is proposed. In this method, replicas are assigned levels and grouped into clusters according to the process by which new replicas are produced from a single replica. A coding rule of "replica level + replica order" is defined to encode and organize the partitioned replicas, and the rule is used to effectively manage the dynamic changes of clusters caused by replicas being dynamically added or removed. With this method, a management pattern that is centralized locally and peer-to-peer across the wide area is established among the large number of replicas, and, combined with the defined "minimum update propagation time", it reduces the consistency maintenance overhead of the replicas. The relationship between the coding rule and the replica scale, as well as the handling of replica failure and recovery, is discussed. Performance tests show that the method can effectively organize large-scale data replicas, scales well, is insensitive to moderate node failures, and is suitable for update-intensive applications.
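As a toy illustration of a "replica level + replica order" code, the sketch below assigns each newly created replica a level one greater than its parent's and the next free order at that level; the clustering rule shown (grouping by level) is our own simplification, not necessarily the paper's.

```python
from collections import defaultdict

class ReplicaRegistry:
    """Toy 'level + order' coding: a replica copied from a parent gets
    level = parent.level + 1 and the next free order at that level."""
    def __init__(self):
        self.next_order = defaultdict(int)
        self.codes = {}                       # replica id -> (level, order)

    def register_original(self, replica_id):
        self.codes[replica_id] = (0, 0)

    def register_copy(self, parent_id, replica_id):
        level = self.codes[parent_id][0] + 1
        order = self.next_order[level]
        self.next_order[level] += 1
        self.codes[replica_id] = (level, order)
        return self.codes[replica_id]

    def cluster(self, level):
        """All replicas of one level form a cluster (one possible grouping rule)."""
        return [r for r, (lv, _) in self.codes.items() if lv == level]

reg = ReplicaRegistry()
reg.register_original('r0')
reg.register_copy('r0', 'r1')        # code (1, 0)
reg.register_copy('r0', 'r2')        # code (1, 1)
reg.register_copy('r1', 'r3')        # code (2, 0)
print(reg.codes, reg.cluster(1))
```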

17.
This paper presents a quorum-based replica control protocol which is resilient to network partitioning. In the best case, the protocol generates quorums of a constant size. When some replicas are inaccessible, the quorum size increases gradually and may be as large as O(n), where n is the number of replicas. However, the expected quorum size is shown to remain constant as n grows. This is a desirable property since the message cost for accessing replicated data is directly proportional to the quorum size. Moreover, the availability of the protocol is shown to be comparably high. With these two properties, constant expected quorum size and comparably high availability, the protocol is practical for managing replicated data.

18.
In contrast to other database applications, multimedia data can have a wide range of quality parameters, such as spatial and temporal resolution and compression format. Users can request data with specific quality requirements due to the needs of their application or the limitations of their resources. The database can support multiple qualities by converting data from the original (high) quality to another (lower) quality to support a user's query, or by precomputing and storing multiple quality replicas of data items. On-the-fly conversion of multimedia data (such as video transcoding) is very CPU intensive and can limit the level of concurrent access supported by the database. Storing all possible replicas, on the other hand, requires unacceptable increases in storage requirements. In this paper, we address the problem of multiple-quality replica selection subject to an overall storage constraint. We establish that the problem is NP-hard and provide heuristic solutions under two different system models: hard-quality and soft-quality. Under the soft-quality model, users are willing to negotiate their quality needs, as opposed to the hard-quality system, wherein users can only accept the exact quality requested. Extensive simulations show that our algorithm performs significantly better than other heuristics. Our algorithms are flexible in that they can be extended to deal with changes in query patterns.
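Since the selection problem is NP-hard, a natural baseline is a knapsack-style greedy heuristic that picks quality replicas by benefit per unit of storage until the budget is exhausted, as sketched below; it is a generic baseline, not one of the paper's algorithms, and the benefit values are illustrative.

```python
def select_replicas(candidates, storage_budget):
    """Greedy pick by benefit density until the storage budget is exhausted.
    Each candidate is (name, size_gb, benefit), where `benefit` might estimate
    transcoding CPU saved, weighted by expected request frequency."""
    chosen, used = [], 0.0
    for name, size, benefit in sorted(candidates,
                                      key=lambda c: c[2] / c[1], reverse=True):
        if used + size <= storage_budget:
            chosen.append(name)
            used += size
    return chosen, used

candidates = [('video1@480p', 0.7, 9.0),
              ('video1@720p', 1.5, 12.0),
              ('video2@480p', 0.6, 5.0),
              ('video2@1080p', 3.0, 6.0)]
print(select_replicas(candidates, storage_budget=2.5))
```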

19.
Research on Replica Management in a Cluster-Server-Based Disaster Tolerance System
A replica management scheme for a cluster-server-based disaster tolerance system is proposed, together with algorithms for maintaining consistency among multiple replicas and for replica selection, and a mathematical model of the number and distribution of replicas. Performance tests of the disaster tolerance system show that it achieves fast automatic data recovery, manages replicas effectively, and maintains a balance between replica reliability and cluster server performance.

20.
A Dynamic and Self-Adaptive Replica Location Method for Data Grid Environments
In data grids, data are often replicated for performance and availability reasons, so efficiently locating the physical position of one or more replicas of a data item is an important problem that data grid systems must solve. This paper proposes a scalable, dynamically self-adaptive distributed replica location method, DSRL. DSRL uses home nodes to support simultaneous, efficient location of multiple replicas of the same data, and local replica location nodes to support local queries for replicas. It introduces a dynamic balanced mapping method that distributes global replica location information evenly across multiple home nodes and adapts to the dynamic joining or leaving of home nodes. The paper describes the components of DSRL in detail and proves its correctness and load-balancing properties. Analysis and experiments show that DSRL offers good scalability, reliability, adaptivity and performance, is simple to implement, and is quite practical.
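The dynamic balanced mapping itself is not specified in the abstract; as a stand-in, the sketch below uses consistent hashing, a well-known way to spread location records evenly across home nodes while tolerating node joins and departures. It is not DSRL's mapping method.

```python
import bisect, hashlib

class ConsistentHashRing:
    """Maps data identifiers to home nodes; adding or removing a node only
    remaps the keys that fell on that node's ring segments."""
    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes
        self.ring = []                      # sorted list of (position, node)
        for node in nodes:
            self.add_node(node)

    def _pos(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._pos(f'{node}#{i}'), node))

    def remove_node(self, node):
        self.ring = [(p, n) for p, n in self.ring if n != node]

    def home_node(self, data_id):
        idx = bisect.bisect_left(self.ring, (self._pos(data_id),))
        return self.ring[idx % len(self.ring)][1]

ring = ConsistentHashRing(['hostA', 'hostB', 'hostC'])
before = ring.home_node('dataset-42')
ring.add_node('hostD')                      # node join: most keys keep their home node
print(before, ring.home_node('dataset-42'))
```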
