19 similar documents found (search time: 78 ms)
1.
2.
A Replica Selection Strategy for Data Grids Based on the Ant Algorithm  (cited 3 times: 0 self-citations, 3 by others)
In a data grid environment where large volumes of data and computing capacity are widely distributed, data replication is an important means of improving the availability of grid applications. How to select optimally among the many replicas in a data grid is a key factor affecting grid performance. This paper therefore proposes an ant-algorithm-based replica selection strategy for data grids, implements it in the grid simulator OptorSim, and analyzes its performance. Simulation results show that the algorithm reduces data access latency and bandwidth consumption while effectively balancing the load across storage nodes in the grid.
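The abstract does not give the paper's formulas; as a minimal sketch of how an ant-style replica selection might weigh pheromone against heuristic desirability (all node names, costs and parameters below are illustrative assumptions, not the paper's implementation):

```python
import random

def select_replica(replicas, pheromone, alpha=1.0, beta=2.0):
    """Pick a replica node probabilistically, ant-colony style.

    replicas  : dict mapping node id -> estimated transfer cost (e.g. latency, ms)
    pheromone : dict mapping node id -> current pheromone level
    alpha/beta: relative weight of pheromone vs. heuristic desirability
                (assumed values; the cited paper's parameters are not stated)
    """
    # Heuristic desirability: cheaper transfers are more attractive.
    scores = {n: (pheromone[n] ** alpha) * ((1.0 / cost) ** beta)
              for n, cost in replicas.items()}
    total = sum(scores.values())
    r, acc = random.uniform(0, total), 0.0
    for node, s in scores.items():
        acc += s
        if r <= acc:
            return node
    return node  # floating-point edge case: fall back to the last node

# Example: three hypothetical storage nodes holding the same file
replicas = {"siteA": 120.0, "siteB": 45.0, "siteC": 300.0}
pheromone = {"siteA": 1.0, "siteB": 1.5, "siteC": 0.8}
print(select_replica(replicas, pheromone))
```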
3.
Replica management has become one of the main factors affecting data grid performance, and research on efficient replica management algorithms largely depends on simulating replica management in data grids. This paper describes the design and implementation of a simulation tool for data grid replica management, and details solutions to several key problems in data grid simulation, such as job scheduling and job execution simulation.
4.
5.
A Security-Oriented Model for Grid Data Replication Strategy  (cited 1 time: 0 self-citations, 1 by others)
In data grids, replicas exist to provide better data access performance and also serve as a redundancy technique for fault tolerance, but the added system complexity introduces a range of unpredictable security risks. Security and fault tolerance are at once complementary and conflicting, and should be studied together. To that end, this paper combines the data replication mechanism with information security and proposes a mathematical model for deciding how many replicas of a data resource to keep. The model jointly considers the service provider's economic benefit and reputation, simplifies a two-objective optimization problem under reasonable assumptions, and determines the optimal upper bound on the number of replicas through numerical computation and analysis.
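The abstract only sketches the shape of the model (benefit from availability versus storage cost and security exposure), not its formulas; a toy numerical search in that spirit, where every function and constant below is an assumption made for illustration:

```python
def utility(n, access_rate=200.0, revenue_per_access=0.01,
            storage_cost=0.5, breach_risk=0.02):
    """Net benefit of keeping n replicas (hypothetical model, not the paper's).

    Availability grows with n with diminishing returns, while storage cost
    and the security exposure of extra copies grow roughly linearly.
    """
    availability = 1.0 - (1.0 - 0.9) ** n          # each replica assumed 90% available
    income = access_rate * revenue_per_access * availability
    cost = storage_cost * n + breach_risk * n      # storage + security exposure
    return income - cost

best_n = max(range(1, 11), key=utility)
print("optimal replica count (toy model):", best_n)
```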
6.
To address the limited storage capacity of sites in a data grid, a weight-based replica replacement strategy (WBRR) is proposed. Simulation experiments in the grid simulation environment OptorSim show that, compared with traditional replica replacement strategies, the weight-based strategy reduces network utilization while shortening system response time, thereby improving overall system performance.
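WBRR's actual weight formula is not given in the abstract; a minimal sketch of weight-based eviction, with the weighting factors and coefficients below being assumptions:

```python
import time

def replica_weight(replica, now=None):
    """Hypothetical weight combining access frequency, recency and size."""
    now = now or time.time()
    recency = 1.0 / (1.0 + now - replica["last_access"])
    return (0.5 * replica["access_count"]
            + 0.3 * recency * 100
            - 0.2 * replica["size_gb"])

def make_room(replicas, needed_gb, capacity_gb):
    """Evict the lowest-weight replicas until the new replica fits."""
    used = sum(r["size_gb"] for r in replicas)
    victims = sorted(replicas, key=replica_weight)
    evicted = []
    while used + needed_gb > capacity_gb and victims:
        v = victims.pop(0)
        replicas.remove(v)
        used -= v["size_gb"]
        evicted.append(v["name"])
    return evicted
```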
7.
In data grids, data replication increases data access speed and reduces bandwidth consumption, and the replica creation strategy is one of the important problems in data replication research. This paper proposes a sliding-window-based replica creation and replacement strategy for P2P network environments, using the transfer-time ratio as the basis for creating and replacing replicas. Analysis and simulation show that the method achieves good performance while keeping the storage space used for replicas under control.
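The abstract states only that a transfer-time ratio over a recent window drives the decisions; a sketch along those lines, where the window length, threshold and exact ratio definition are assumptions:

```python
from collections import deque

class SlidingWindowReplicator:
    """Sketch of sliding-window replica creation keyed on transfer-time ratio."""

    def __init__(self, window=20, create_threshold=2.0):
        self.window = deque(maxlen=window)   # recent accesses to one file
        self.create_threshold = create_threshold

    def record_access(self, remote_time, local_time_estimate):
        # Ratio > 1 means fetching remotely is slower than a local copy would be.
        self.window.append(remote_time / max(local_time_estimate, 1e-9))

    def should_replicate(self):
        if not self.window:
            return False
        avg_ratio = sum(self.window) / len(self.window)
        return avg_ratio >= self.create_threshold

r = SlidingWindowReplicator()
for _ in range(10):
    r.record_access(remote_time=4.0, local_time_estimate=1.5)
print(r.should_replicate())  # True: remote fetches are consistently slower
```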
8.
Research on Applying the Ant Algorithm to Replica Selection in Data Grids  (cited 1 time: 0 self-citations, 1 by others)
Because data grids use replication for backup, a file exists as multiple replicas. When a user accesses a file, how to choose one node among the several that hold the same file, so as to obtain the best possible service at the same cost, is a problem in urgent need of study. This paper studies the principles of the ant algorithm in depth, analyzes the main factors affecting replica selection performance, uses these factors to design an ant-algorithm-based replica selection strategy, and analyzes and implements the new algorithm. Experiments on a simulation platform show that the algorithm effectively reduces data access latency and bandwidth consumption, balances the load across storage nodes in the grid, and increases data access speed.
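Complementing the selection sketch shown under item 2 above, the other half of an ant-colony scheme is the pheromone update; a standard evaporation-plus-reinforcement step of that form, with the rate constants being assumptions rather than values from the paper:

```python
def update_pheromone(pheromone, served_node, transfer_time,
                     evaporation=0.1, q=10.0):
    """Ant-colony style update (illustrative constants).

    Every node's trail evaporates a little, and the node that actually served
    the request is reinforced in inverse proportion to the transfer time, so
    fast, lightly loaded nodes accumulate pheromone over time.
    """
    for node in pheromone:
        pheromone[node] *= (1.0 - evaporation)
    pheromone[served_node] += q / max(transfer_time, 1e-9)
    return pheromone

pheromone = {"siteA": 1.0, "siteB": 1.5, "siteC": 0.8}
print(update_pheromone(pheromone, "siteB", transfer_time=45.0))
```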
9.
10.
A Dynamic, Self-Adaptive Replica Location Method for Data Grid Environments  (cited 10 times: 2 self-citations, 10 by others)
In data grids, data is often replicated for reasons of performance and availability, and efficiently locating the physical position of one or more replicas of a given data item is an important problem that a data grid system must solve. This paper proposes a scalable, dynamic, self-adaptive distributed replica location method, DSRL. DSRL uses home nodes to support simultaneous, efficient location of multiple replicas of the same data item, and local replica location nodes to support local replica queries. DSRL introduces a dynamic balanced mapping method that distributes global replica location information evenly across multiple home nodes and adapts to home nodes dynamically joining or leaving. The paper describes the components of DSRL in detail and proves properties such as correctness and load balance. Analysis and experiments show that DSRL has good scalability, reliability, adaptivity and performance, is simple to implement, and is quite practical.
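The abstract does not spell out the dynamic balanced mapping. A common way to get both even distribution and tolerance of nodes joining or leaving is consistent hashing, sketched below as an assumed analogue rather than DSRL's actual construction (node names and the hash choice are illustrative):

```python
import bisect
import hashlib

class HomeNodeRing:
    """Consistent-hash ring mapping data identifiers to home nodes.

    Virtual nodes spread location records evenly, and a join or leave only
    remaps the records of neighboring ranges -- the properties the abstract
    attributes to DSRL's mapping, approximated here for illustration.
    """
    def __init__(self, vnodes=64):
        self.vnodes = vnodes
        self.ring = []            # sorted list of (hash, node)

    @staticmethod
    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._h(f"{node}#{i}"), node))

    def remove_node(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def home_of(self, data_id):
        h = self._h(data_id)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = HomeNodeRing()
for n in ("home1", "home2", "home3"):
    ring.add_node(n)
print(ring.home_of("dataset-42"))   # home node holding this item's replica index
```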
11.
A Grid Scheduling Method Based on a Hierarchical Scheduling Strategy and Dynamic Data Replication  (cited 2 times: 0 self-citations, 2 by others)
To schedule jobs and replicate data effectively in a grid and thereby reduce job execution time, this paper proposes a job scheduling algorithm (ISS) and an optimized dynamic data replication algorithm (ODHRA), and builds a scheme that combines the two. The scheme uses ISS to jointly consider the length of the job waiting queue, the location of the data a job requires, and the computing capacity of each site; scheduling is performed hierarchically over the network structure, and appropriate weight coefficients are used to compute an aggregate job cost and find the best region of computing nodes. ODHRA analyzes data transfer time, storage access latency, replica requests waiting in the storage queue, and the distance between nodes to pick the best replica location among the many replicas, and is combined with replica placement and replica management to reduce file access time. Simulation results show that, in terms of average job execution time, the proposed scheme performs better than other algorithms.
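A minimal sketch of the two weighted cost functions the abstract describes, over exactly the factors it lists; the weight coefficients, field names and example values are assumptions:

```python
def site_cost(site, w_queue=0.4, w_data=0.4, w_cpu=0.2):
    """Aggregate job cost for a candidate site (ISS-style factors; lower is better)."""
    data_miss = 1.0 - site["local_data_fraction"]
    return (w_queue * site["queue_length"]
            + w_data * data_miss * 100
            + w_cpu * (1.0 / site["cpu_capacity"]) * 100)

def replica_cost(replica, w_xfer=0.5, w_latency=0.2, w_pending=0.2, w_dist=0.1):
    """Cost of fetching from one replica site (ODHRA-style factors; lower is better)."""
    return (w_xfer * replica["transfer_time"]
            + w_latency * replica["storage_latency"]
            + w_pending * replica["pending_requests"]
            + w_dist * replica["hops"])

sites = [
    {"name": "s1", "queue_length": 12, "local_data_fraction": 0.8, "cpu_capacity": 2.0},
    {"name": "s2", "queue_length": 3,  "local_data_fraction": 0.2, "cpu_capacity": 1.0},
]
print(min(sites, key=site_cost)["name"])   # site chosen to run the job
```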
12.
13.
An optimal replication strategy for data grid systems  (cited 1 time: 0 self-citations, 1 by others)
Data access latency is an important metric of system performance in a data grid. With an efficient replication strategy, the amount of data transferred across the wide area network decreases, and so, ultimately, does the average data access latency. The motivation of our research is to solve the optimized replica distribution problem in a data grid; that is, under storage constraints the system should place multiple replicas of each data item so as to minimize the average data access latency. This paper proposes a model of the replication strategy in a federated data grid and gives the optimized solution. Analysis and simulation results show that the optimized replication strategy proposed in this paper is superior to the LRU caching strategy and to the uniform, proportional and square-root replication strategies in terms of wide area network bandwidth requirements and average data access latency.
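The abstract describes, but does not state, the underlying optimization; a generic formulation of this kind of replica-distribution problem (the notation is mine, not the paper's) would be:

```latex
% n_i = number of replicas of data item i,  s_i = its size,
% p_i = its access probability,  L(n_i) = expected access latency of item i
%       when n_i replicas exist (decreasing in n_i),
% C   = total storage available across the federation.
\[
\begin{aligned}
\min_{n_1,\dots,n_M}\quad & \sum_{i=1}^{M} p_i \, L(n_i) \\
\text{s.t.}\quad          & \sum_{i=1}^{M} n_i \, s_i \le C,
  \qquad n_i \ge 1,\; n_i \in \mathbb{Z}
\end{aligned}
\]
```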
14.
15.
Nazanin Saadat, Amir Masoud Rahmani 《Future Generation Computer Systems》2012, 28(4): 666-681
In recent years, grid technology has grown so quickly that it is used in many scientific experiments and research centers. A large number of storage elements and computational resources are combined to form a grid that provides shared access to extra computing power. In particular, the data grid deals with data-intensive applications and provides intensive resources across widely distributed communities. Data replication is an efficient way to distribute replicas among data grids, making it possible to access the same data at different locations in the grid. Replication reduces data access time and improves system performance. In this paper, we propose a new dynamic data replication algorithm named PDDRA that optimizes the traditional algorithms. The proposed algorithm is based on an assumption: members of a VO (Virtual Organization) have similar interests in files. Based on this assumption and on file access history, PDDRA predicts the future needs of grid sites and pre-fetches a sequence of files to the requesting grid site, so that the next time the site needs a file, it is already available locally. This considerably reduces access latency, response time and bandwidth consumption. PDDRA consists of three phases: storing file access patterns; requesting a file and performing replication and pre-fetching; and replacement. The algorithm was tested using the grid simulator OptorSim, developed by the European DataGrid project. The simulation results show that the proposed algorithm performs better than other algorithms in terms of job execution time, effective network usage, total number of replications, hit ratio and percentage of storage filled.
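PDDRA's prediction is described only at a high level (VO members have similar file interests, so access history can suggest what a site will ask for next); a minimal history-based pre-fetching sketch along those lines, with the data structures, file names and cutoff being assumptions:

```python
from collections import defaultdict

class AccessHistoryPredictor:
    """Toy successor-frequency model over file access sequences.

    Assumption-level sketch of PDDRA's idea: files that VO members typically
    requested right after file F are pre-fetched when F is requested again.
    """
    def __init__(self):
        self.successors = defaultdict(lambda: defaultdict(int))

    def record_sequence(self, accesses):
        for cur, nxt in zip(accesses, accesses[1:]):
            self.successors[cur][nxt] += 1

    def prefetch_candidates(self, just_requested, k=2):
        counts = self.successors.get(just_requested, {})
        return [f for f, _ in sorted(counts.items(), key=lambda x: -x[1])[:k]]

p = AccessHistoryPredictor()
p.record_sequence(["calib.dat", "run1.root", "run2.root"])
p.record_sequence(["calib.dat", "run1.root", "geo.xml"])
print(p.prefetch_candidates("calib.dat"))   # ['run1.root']
```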
16.
17.
18.
Najme Mansouri 《Frontiers of Computer Science in China》2014, (3): 391-408
Data Grid integrates geographically distributed resources to solve data-intensive scientific applications. Effective scheduling in the Grid can reduce the amount of data transferred among nodes by submitting a job to a node where most of the requested data files are available. Scheduling is a traditional problem in parallel and distributed systems; however, because of the special issues and goals of the Grid, traditional approaches are no longer effective in this environment, so methods specialized for this kind of parallel and distributed system are needed. Another solution is to use a data replication strategy to create multiple copies of files and store them in convenient locations to shorten file access times. Combining these two ideas, in this paper we develop a job scheduling policy, called hierarchical job scheduling strategy (HJSS), and a dynamic data replication strategy, called advanced dynamic hierarchical replication strategy (ADHRS), to improve data access efficiency in a hierarchical Data Grid. HJSS uses hierarchical scheduling to reduce the search time for an appropriate computing node; it considers network characteristics, the number of jobs waiting in the queue, file locations, and the disk read speed of the storage drives at data sources. Moreover, because storage capacity is limited, a good replica replacement algorithm is needed. We present a replacement strategy that deletes files in two steps when free space is insufficient for a new replica: first, it deletes the files with the shortest transfer time; second, if space is still insufficient, it considers the time the replica was last requested, the number of accesses, the size of the replica, and the file transfer time. The simulation results show that our proposed algorithm performs better than other algorithms in terms of job execution time, number of intercommunications, number of replications, hit ratio, computing resource usage and storage usage.
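A sketch of the two-step replacement the abstract outlines; the transfer-time cutoff used to end step one and the step-two scoring weights are assumptions, since the abstract names the factors but not the formula:

```python
def two_step_replacement(replicas, needed_gb, free_gb, cheap_refetch_s=30.0):
    """Two-step eviction in the spirit of ADHRS.

    replicas: list of dicts with keys name, size_gb, transfer_time,
              last_request_age, access_count.  Returns evicted replica names.
    """
    evicted = []

    def evict(r):
        nonlocal free_gb
        replicas.remove(r)
        free_gb += r["size_gb"]
        evicted.append(r["name"])

    # Step 1: drop replicas that are cheap to re-fetch (short transfer time).
    for r in sorted(list(replicas), key=lambda x: x["transfer_time"]):
        if free_gb >= needed_gb:
            return evicted
        if r["transfer_time"] <= cheap_refetch_s:
            evict(r)

    # Step 2: still short on space -- rank remaining replicas by how much they
    # are worth keeping (recency, popularity, size, transfer time).
    def keep_value(r):
        return (0.4 / (1.0 + r["last_request_age"])   # recently used -> keep
                + 0.3 * r["access_count"]             # popular -> keep
                + 0.2 * r["size_gb"]                  # large -> costly to rebuild
                + 0.1 * r["transfer_time"])           # slow to re-fetch -> keep

    for r in sorted(list(replicas), key=keep_value):
        if free_gb >= needed_gb:
            break
        evict(r)
    return evicted
```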
19.
Data replication is widely used in data grids to shorten data access and transfer times and to reduce network bandwidth consumption. For a hybrid grid topology containing both tree and ring structures, this paper proposes a replica creation strategy that takes into account network bandwidth, network transmission delay, user request frequency, and the available storage space at each site, and introduces an evaluation function to measure the influence of each factor; the strategy has good reliability, scalability and adaptivity. Simulation results show that this replica creation strategy effectively reduces the average data access time.
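The paper's actual evaluation function is not given in the abstract; a hedged sketch of an evaluation function over exactly the factors it lists, with the weights, normalization constants and example sites being assumptions:

```python
def site_score(site, w_bw=0.35, w_delay=0.25, w_freq=0.25, w_space=0.15):
    """Illustrative evaluation function for choosing where to create a replica.

    Higher is better: favor high bandwidth, low delay, frequent local requests
    and ample free storage at the candidate site.
    """
    return (w_bw * site["bandwidth_mbps"] / 1000.0
            - w_delay * site["delay_ms"] / 100.0
            + w_freq * site["request_freq"] / 10.0
            + w_space * site["free_space_gb"] / 100.0)

candidates = [
    {"name": "edge1", "bandwidth_mbps": 800, "delay_ms": 20, "request_freq": 9, "free_space_gb": 50},
    {"name": "core1", "bandwidth_mbps": 400, "delay_ms": 60, "request_freq": 3, "free_space_gb": 200},
]
# Create the new replica at the best-scoring candidate site.
print(max(candidates, key=site_score)["name"])
```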