Found 18 similar documents. Search time: 125 ms.
1.
2.
By comparing and analysing the aggressive-copy and lazy-copy replica consistency protocols in data grid environments, and addressing the shortcomings of both, a new replica consistency algorithm, lazy_agg-copy, is proposed on the basis of an extended replica consistency framework. Within a multi-level framework, the algorithm randomly selects a subset of replicas for consistency updates, compensating for the weaknesses of the two existing algorithms. The lazy_agg-copy algorithm and the two existing ones were simulated in the grid simulator OptorSim. The simulation results show that lazy_agg-copy achieves a better balance among timeliness, network load, and bandwidth consumption, yielding a better overall result.
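The core idea of updating only a random subset of replicas eagerly while letting the rest refresh lazily can be sketched as below. This is a minimal illustration, not the paper's actual lazy_agg-copy protocol: the replica store layout, `propagate_update`, `read`, and the `eager_fraction` parameter are all assumptions made for the sketch.

```python
import random

def propagate_update(replicas, new_value, eager_fraction=0.5, rng=None):
    """Hybrid push: eagerly update a random subset of replicas (as an
    aggressive-copy protocol would for all of them) and only mark the
    rest stale, leaving them to pull the new value on their next access
    (lazy-copy behaviour). Returns the set of eagerly updated replicas."""
    rng = rng or random.Random()
    names = list(replicas)
    eager_count = max(1, int(len(names) * eager_fraction))
    eager = set(rng.sample(names, eager_count))
    for name in names:
        if name in eager:
            replicas[name] = {"value": new_value, "stale": False}
        else:
            replicas[name] = {"value": replicas[name]["value"], "stale": True}
    return eager

def read(replicas, name, master_value):
    """On access, a stale replica synchronises with the master first."""
    if replicas[name]["stale"]:
        replicas[name] = {"value": master_value, "stale": False}
    return replicas[name]["value"]
```

The trade-off this captures is the one the abstract measures: a larger `eager_fraction` improves timeliness at the cost of more update traffic.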
3.
4.
A QoS-aware replica placement method for data grids (cited 1 time: 0 self-citations, 1 by others)
Reliable quality of service is one of the key factors for data grid applications. The QoS-aware replica placement problem adds individual QoS constraints to the traditional model, meeting the needs of data grid applications with strict QoS requirements. To address the shortcomings of existing replica placement algorithms, such as lack of support for multi-attribute constraints and limited scalability, a three-phase replica placement algorithm accelerated by matrix operations, TP-GABMAC, is proposed, and a replica ring is introduced to handle replica updates and consistency maintenance. Analysis and experiments show that TP-GABMAC is stable and scalable, and obtains reasonable replica strategies under a variety of network topologies, access patterns, and load conditions.
5.
Replica management and update for data grids with individual QoS constraints (cited 1 time: 0 self-citations, 1 by others)
Data grid systems usually employ replication to improve overall performance, and traditional replica placement determines the number and deployment of replicas from aggregate QoS requirements. For the class of data grid applications with strict QoS requirements, a data grid model with individual QoS constraints, IQDG, is established, together with a heuristic individually QoS-constrained replica placement algorithm, qGREP, and a consistency maintenance method based on a logical ring structure. The heuristic used in IQDG jointly considers satisfying individual QoS constraints and controlling replica cost, and obtains reasonable replica strategies. Theoretical analysis proves the correctness and convergence of the algorithm, and simulation results show that it effectively solves the individually QoS-constrained replica placement problem, achieving good access performance under a variety of network topologies, access patterns, and load conditions.
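The qGREP heuristic itself is not given in this listing; as a rough illustration of individually QoS-constrained placement, a greedy covering sketch might look as follows. The `dist`, `qos_bound`, and `storage_cost` inputs and the covering criterion are assumptions for the sketch, not the paper's algorithm.

```python
def place_replicas(dist, qos_bound, storage_cost):
    """Greedy sketch: repeatedly place a replica at the candidate node that
    covers the most still-unsatisfied clients within their *individual* QoS
    (latency) bounds, breaking ties by cheaper storage, until every client
    is satisfied.

    dist[c][n]      -- latency from client c to candidate node n
    qos_bound[c]    -- per-client latency constraint
    storage_cost[n] -- cost of hosting a replica at node n (tie-breaker)
    """
    placed = []
    unsatisfied = set(dist)
    while unsatisfied:
        candidates = set(storage_cost) - set(placed)
        best = max(candidates,
                   key=lambda n: (sum(1 for c in unsatisfied
                                      if dist[c][n] <= qos_bound[c]),
                                  -storage_cost[n]))
        covered = {c for c in unsatisfied if dist[c][best] <= qos_bound[c]}
        if not covered:
            raise ValueError("no placement can satisfy the remaining clients")
        placed.append(best)
        unsatisfied -= covered
    return placed
```

Unlike aggregate-QoS placement, the per-client bound here is a hard constraint: the loop cannot terminate until every individual client is within its own latency limit.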
6.
A domain-based adaptive replica selection model, DARSM, is proposed. The model divides component replicas into strong-consistency and weak-consistency domains, with state synchronised between domains through a consistency-window mechanism. Based on DARSM, a partition-weighted adaptive replica selection algorithm, PWARS (partition-weighted based adaptive replica selection), is given, which uses dynamic performance measurements to select a set of component replicas satisfying both timing and consistency constraints. To adapt to dynamic changes in the consistency constraints of requests, a consistency window adaptive reconfiguration algorithm, CWAR, is also proposed. By introducing a possibility model of consistency constraints, CWAR dynamically reconfigures the consistency window, achieving adaptive control of replica consistency. Prototype experiments and performance evaluation on an OnceAS application server cluster show that the approach significantly improves replica selection performance.
7.
8.
To evict data replicas effectively in a data grid environment without increasing replica storage space, an improved replica eviction algorithm is proposed. The algorithm uses a weight function that accounts for both access time and access frequency, and introduces a dynamic adjustment factor μ into the replica transfer cost term, dynamically adjusting the proportion of the transfer cost according to actual conditions. Simulation results show that when replica sizes differ widely, the algorithm greatly reduces eviction errors, lowers the average job execution time on grid nodes, and improves effective network utilisation.
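A weight function of this shape, combining access recency, access frequency, and a μ-scaled transfer cost, can be sketched as below. The exact formula and the replica-record fields (`last_access`, `freq`, `transfer_cost`, `size`) are assumptions for illustration; the paper's weight function is not reproduced here.

```python
import time

def eviction_score(replica, mu, now=None):
    """Weight-function sketch: recency and frequency raise the score,
    and the dynamic factor mu scales how much the transfer cost
    contributes. A lower score marks a better eviction candidate."""
    now = time.time() if now is None else now
    recency = 1.0 / (1.0 + (now - replica["last_access"]))
    return recency * replica["freq"] + mu * replica["transfer_cost"]

def evict(replicas, needed, mu, now=None):
    """Evict lowest-scoring replicas until `needed` bytes are freed."""
    freed, victims = 0, []
    for name in sorted(replicas,
                       key=lambda n: eviction_score(replicas[n], mu, now)):
        if freed >= needed:
            break
        freed += replicas[name]["size"]
        victims.append(name)
    for name in victims:
        del replicas[name]
    return victims
```

Raising μ makes expensive-to-refetch replicas harder to evict, which is the lever the dynamic adjustment factor provides.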
9.
A replica creation strategy based on access frequency is proposed. The strategy creates replicas according to how frequently grid users access file replicas, and also uses the frequency values when replacing replicas, deleting those that are rarely accessed. This strategy satisfies users' requirements for accessing replicas well, and improves replica transfer rates and bandwidth utilisation. The modules of the grid simulator OptorSim were modified according to the characteristics of the grid structure and the environment required by the algorithm, and the algorithm was tested. The test results show that the access-frequency-based replica creation algorithm improves the efficiency of user access to replicas.
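The creation side of such a strategy, counting per-file accesses at a site and replicating once a threshold is crossed, evicting the least-frequently accessed local replica when full, can be sketched as follows. The `threshold` and `capacity` parameters and the site-state layout are illustrative assumptions.

```python
def on_access(site, file_id, threshold=3, capacity=2):
    """Frequency-driven sketch: count accesses per file at a site; once a
    file's count reaches `threshold` and it is not yet replicated locally,
    create a replica, evicting the least-frequently accessed local replica
    if the site is full. Returns the file id if a replica was created."""
    site["counts"][file_id] = site["counts"].get(file_id, 0) + 1
    replicas = site["replicas"]
    if file_id in replicas or site["counts"][file_id] < threshold:
        return None
    if len(replicas) >= capacity:
        coldest = min(replicas, key=lambda f: site["counts"].get(f, 0))
        replicas.remove(coldest)
    replicas.add(file_id)
    return file_id
```

Creation and replacement use the same frequency counters, which matches the abstract's point that replacement also falls back on the frequency values.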
10.
在网格中,数据常常由于性能和可用性等原因进行复制.如何有效地管理网格环境中的各副本是网格系统中需要解决的重要问题.提出了一种可扩展及动态自适应的副本管理拓扑结构——混杂网格副本管理拓扑结构(HGRMT).HGRMT将传统树状的网格逻辑结构中的各节点,在不同层次上又连接成环状,并在此基础上提出了层间与组内不同的副本创建算法.实验表明,HGRMT有较好的可扩展性、可靠性和自适应性. 相似文献
11.
The expiration-based scheme is widely used to manage the consistency of cached and replicated contents such as Web objects. In this approach, each replica is associated with an expiration time beyond which the replica has to be validated. While the expiration-based scheme has been investigated in the context of a single replica, not much work has been done on its behaviors with respect to multiple replicas. To allow for efficient consistency management, it is desirable to organize the replicas into a distribution tree where a lower level replica seeks validation with a higher level replica when its lifetime expires. This paper investigates the construction of a distribution tree for a given set of replicas with the objective of minimizing the total communication cost of consistency management. This is formulated as an optimization problem and is proven to be NP-complete. The optimal distribution tree is identified in some special cases and several heuristic algorithms are proposed for the general problem. The performance of the heuristic algorithms is experimentally evaluated against two classical graph-theoretic algorithms of tree construction: the shortest-paths tree and the minimum spanning tree.
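Since the general problem is NP-complete, heuristics of the kind evaluated in the paper attach replicas greedily. A hypothetical sketch (not one of the paper's actual heuristics) that attaches high-validation-rate replicas first to the cheapest already-attached node might look like:

```python
def build_validation_tree(root, cost, rate):
    """Heuristic sketch: attach replicas one by one, highest validation
    rate first, to whichever already-attached node offers the cheapest
    edge, approximately minimising sum(rate[r] * cost[r][parent[r]]).
    `cost` is a symmetric pairwise communication-cost table and `rate`
    maps each replica to its expected validation frequency; both are
    illustrative inputs."""
    parent = {root: None}
    for r in sorted(rate, key=rate.get, reverse=True):
        parent[r] = min(parent, key=lambda p: cost[r][p])
    return parent

def total_cost(parent, cost, rate):
    """Total communication cost of validations along the tree edges."""
    return sum(rate[r] * cost[r][p] for r, p in parent.items()
               if p is not None)
```

The contrast with a shortest-paths tree or minimum spanning tree is that the objective here weights each edge by how often validations actually traverse it.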
12.
Free energy surfaces, calculated during computer simulations, are known to be useful in characterizing the system of interest such as bio-molecules. However, it is usually very difficult to evaluate free energy from direct simulations, mainly because of high computational costs. Several simulation strategies, including replica exchange method (REM), have been developed to overcome this problem by providing efficient conformational sampling methods. Even with such efficient simulation schemes, fundamental questions concerning simulation convergence still remain to be resolved. In this paper, we propose to use a meta-distance between different free energy surfaces as one of the minimal measures for determining simulation consistency. This method is used for examining free energy surfaces obtained from folding simulations of a synthetic 11-residue protein (1AQG) using REM.
13.
14.
Wenbing Zhao 《International Journal of Parallel, Emergent and Distributed Systems》2016,31(3):254-267
The primary concern of traditional Byzantine fault tolerance is to ensure strong replica consistency by executing incoming requests sequentially according to a total order. Speculative execution at both clients and server replicas has been proposed as a way of reducing the end-to-end latency. In this article, we introduce optimistic Byzantine fault tolerance. Optimistic Byzantine fault tolerance aims to achieve higher throughput and lower end-to-end latency by using a weaker replica consistency model. Instead of ensuring strong safety as in traditional Byzantine fault tolerance, nonfaulty replicas are brought to a consistent state periodically and on demand in optimistic Byzantine fault tolerance. Not all applications are suitable for optimistic Byzantine fault tolerance. We identify three types of applications, namely, real-time collaborative editing, event stream processing, and services constructed with conflict-free replicated data types, as good candidates for applying optimistic Byzantine fault tolerance. Furthermore, we provide a design guideline on how to achieve eventual consistency and how to recover from conflicts at different replicas. In optimistic Byzantine fault tolerance, a replica executes a request immediately without first establishing a total order of the message, and Byzantine agreement is used only to establish a common state synchronization point and the set of individual states needed to resolve conflicts. The recovery mechanism ensures both replica consistency and the validity of the system by identifying and removing the operations introduced by faulty clients and server replicas.
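Conflict-free replicated data types are a good fit for this model because concurrent execution without a total order never produces conflicts to recover from. A standard grow-only counter (G-counter), shown here as general background rather than anything taken from the article, illustrates why:

```python
class GCounter:
    """Grow-only counter CRDT: each replica increments only its own slot,
    and merging takes the per-slot maximum, so replicas that apply
    concurrent updates in any order still converge to the same value."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.slots = {}

    def increment(self, n=1):
        self.slots[self.replica_id] = self.slots.get(self.replica_id, 0) + n

    def merge(self, other):
        """Commutative, associative, idempotent merge."""
        for rid, count in other.slots.items():
            self.slots[rid] = max(self.slots.get(rid, 0), count)

    def value(self):
        return sum(self.slots.values())
```

For a CRDT-based service, the Byzantine agreement step in the optimistic scheme only needs to fix a common synchronization point; the data type's merge does the rest.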
15.
T. W. Page R. G. Guy J. S. Heidemann D. H. Ratner P. L. Reiher A. Goel G. H. Kuenning G. J. Popek 《Software》1998,28(2):155-180
This research proposes and tests an approach to engineering distributed file systems that are aimed at wide-scale, Internet-based use. The premise is that replication is essential to deliver performance and availability, yet the traditional conservative replica consistency algorithms do not scale to this environment. Our Ficus replicated file system uses a single-copy availability, optimistic update policy with reconciliation algorithms that reliably detect concurrent updates and automatically restore the consistency of directory replicas. The system uses the peer-to-peer model in which all machines are architectural equals but still permits configuration in a client-server arrangement where appropriate. Ficus has been used for six years at several geographically scattered installations. This paper details and evaluates the use of optimistic replica consistency, automatic update conflict detection and repair, the peer-to-peer (as opposed to client-server) interaction model, and the stackable file system architecture in the design and construction of Ficus. The paper concludes with a number of lessons learned from the experience of designing, building, measuring, and living with an optimistically replicated file system. © 1998 John Wiley & Sons, Ltd.
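Reliable detection of concurrent updates in optimistic replication is classically done with version vectors; a minimal comparison routine in that style (Ficus's actual reconciliation machinery is considerably more involved) can be sketched as:

```python
def compare(vv1, vv2):
    """Version-vector comparison sketch: returns 'equal', 'before',
    'after', or 'concurrent'. The 'concurrent' case is the signal an
    optimistic replica system uses to flag a conflicting update that
    needs repair rather than silent overwrite."""
    keys = set(vv1) | set(vv2)
    le = all(vv1.get(k, 0) <= vv2.get(k, 0) for k in keys)
    ge = all(vv1.get(k, 0) >= vv2.get(k, 0) for k in keys)
    if le and ge:
        return "equal"
    if le:
        return "before"
    if ge:
        return "after"
    return "concurrent"
```

The key property is that two replicas that updated independently produce vectors that are incomparable, so the conflict is detected deterministically on either side.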
16.
This paper proposes a concise and practical new model for metadata replica consistency management, MRCC. The model introduces a scheduler for centralised management of the metadata servers, which not only gives the system better scalability and availability but also allows flexible control of metadata replica consistency, making better use of metadata replicas for fault tolerance and load balancing.
17.
18.
Tang Xueyan Xu Jianliang Lee Wang-Chien 《Parallel and Distributed Systems, IEEE Transactions on》2008,19(12):1683-1694
Consistency maintenance is important to the sharing of dynamic contents in peer-to-peer (P2P) networks. The TTL-based mechanism is a natural choice for maintaining freshness in P2P content sharing. This paper investigates TTL-based consistency maintenance in unstructured P2P networks. In this approach, each replica is assigned an expiration time beyond which the replica stops serving new requests unless it is validated. While TTL-based consistency is widely explored in many client-server applications, there has been no study on TTL-based consistency in P2P networks. Our main contribution is an analytical model that studies the search performance and the freshness of P2P content sharing under TTL-based consistency. Due to the random nature of request routing, P2P networks are fundamentally different from most existing TTL-based systems in that every node with a valid replica has the potential to serve any other node. We identify and discuss the factors that affect the performance of P2P content sharing under TTL-based consistency. Our results indicate a tradeoff between search performance and freshness: the search cost decreases sublinearly with decreasing freshness of P2P content sharing. We also compare two types of unstructured P2P networks and find that clustered P2P networks improve the freshness of content sharing over flat P2P networks under TTL-based consistency.
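The per-replica rule the paper builds on, serve requests only until the expiration time, then revalidate, is simple enough to sketch directly. The record layout and the `validate` callback standing in for the upstream node are assumptions for the sketch.

```python
def serve(replica, now, validate):
    """TTL sketch: a replica answers requests only while `now` is before
    its expiration time; otherwise it must first validate against its
    source. `validate` is a caller-supplied function standing in for the
    upstream node and returns (fresh_data, new_ttl). The second element
    of the result records whether an upstream validation happened."""
    if now < replica["expires_at"]:
        return replica["data"], False            # served from local replica
    data, ttl = validate()
    replica.update(data=data, expires_at=now + ttl)
    return data, True                            # revalidated upstream
```

The freshness/search-cost trade-off the paper analyses falls out of the TTL choice: longer TTLs mean more nodes hold valid replicas (cheaper search) but serve staler content.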