Similar Documents
Found 20 similar documents.
1.
Design of Message-Oriented Middleware for Data Integration   (Total citations: 8; self-citations: 1; citations by others: 8)
Synchronization and replication are two key aspects of data integration. This paper analyzes data synchronization and replication methods in distributed environments and proposes a message-oriented middleware design for replication across heterogeneous databases, which serves as the data-platform foundation of the data center of Shanghai Jiao Tong University.

2.
In large-scale data-intensive systems based on event streams, data falls into two categories: event-stream data and event configuration data, where the configuration data expresses the rules applied to the event streams. In a shared-nothing architecture, configuration data is generally distributed to every database node by full replication so that it can be joined with the event-stream data in queries. With fully replicated configuration data, every modification must be applied on all nodes, so consistency control and multi-node transaction processing become the key problems in managing this kind of data. This paper analyzes the characteristics of configuration data and the strategies for managing it; consistency control for configuration data has been successfully implemented in the DBroker system.
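As a rough illustration of the all-or-nothing update problem described above (not the DBroker implementation; the node interface and names below are hypothetical), a fully replicated configuration change can be staged on every node and committed only when all nodes accept it:

```python
# Minimal sketch (hypothetical Node API): a configuration update is staged on
# every replica and committed only if all replicas accept it, otherwise rolled back.
class Node:
    def __init__(self, name):
        self.name = name
        self.config = {}       # committed configuration
        self.staged = None     # pending update, not yet visible to queries

    def prepare(self, update):
        self.staged = dict(update)
        return True            # a real node would validate and might refuse

    def commit(self):
        self.config.update(self.staged)
        self.staged = None

    def abort(self):
        self.staged = None

def replicate_config(nodes, update):
    """Apply `update` on all nodes, or on none of them."""
    if all(node.prepare(update) for node in nodes):
        for node in nodes:
            node.commit()
        return True
    for node in nodes:         # at least one node refused: undo everywhere
        node.abort()
    return False

nodes = [Node("db1"), Node("db2"), Node("db3")]
replicate_config(nodes, {"rule.window_ms": 500})
```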

3.
Research and Application of Data Replication in Distributed Systems   (Total citations: 2; self-citations: 1; citations by others: 2)
张秋余  王璐 《计算机工程与设计》2005,26(5):1185-1186,1189
Maintaining data consistency is the key to the concurrent execution of replicated databases. This paper introduces and analyzes the built-in data replication components and the transaction model of MS SQL Server 2000, and finally, in the context of real application software development, implements the relevant principles and measures of concurrency control for data replication.

4.
The DAO programming interface for Jet databases provides powerful database manipulation capabilities, and its synchronization (replication) feature provides a basis for developing small and medium-sized networked application systems. However, its developer documentation provides no information on detecting and resolving data conflicts during synchronization. Based on VB, this paper discusses several issues of DAO synchronization and proposes a programmatic approach to detecting and resolving data conflicts, further completing the functionality of DAO over Jet databases.
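The paper's solution is written against VB and the Jet/DAO API; the sketch below only illustrates the general idea in Python, with hypothetical row structures and a simple last-writer-wins rule rather than the paper's actual method:

```python
# Minimal sketch of version-based conflict detection with a last-writer-wins
# resolution rule. Rows are dicts carrying a "version" counter and a "modified"
# timestamp; this is illustrative only, not the Jet/DAO replication mechanism.
def synchronize_row(local, remote, base_version):
    """Return the row both replicas should keep after synchronization."""
    local_changed = local["version"] > base_version
    remote_changed = remote["version"] > base_version
    if local_changed and remote_changed:          # both sides changed: conflict
        winner = local if local["modified"] >= remote["modified"] else remote
        return winner                             # last writer wins
    return local if local_changed else remote

row_a = {"id": 7, "value": "red",  "version": 3, "modified": 1001}
row_b = {"id": 7, "value": "blue", "version": 3, "modified": 1005}
print(synchronize_row(row_a, row_b, base_version=2))   # conflict -> row_b
```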

5.
Data replication is a very important technique in distributed database systems: it improves the disaster tolerance of the distributed database and reduces transaction response time. This paper studies data replication techniques in detail and introduces the application of data replication in Oracle databases.

6.
王升平  李青 《计算机工程》2007,33(20):83-85
A data replication system between networked databases is built according to application requirements. Given the limitations of OGSA-DAI's data-provisioning capability, the paper proposes pooling OGSA-DAI sessions to improve OGSA-DAI's ability to deliver continuous data streams to external consumers. It describes the implementation of key techniques such as database change monitoring and session pooling, and uses a data replication example to illustrate the flow from data monitoring to data publication in the replication framework.
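As a hedged sketch of the pooling idea only (generic Python, not the OGSA-DAI session API; all names are made up), sessions can be created once, borrowed per request, and returned for reuse so that a continuous stream of requests avoids repeated session setup:

```python
# Minimal sketch of a session pool: sessions are created up front, borrowed for
# each request, and returned for reuse. Names are hypothetical, not OGSA-DAI's API.
import queue

class Session:
    def __init__(self, session_id):
        self.session_id = session_id
    def query(self, sql):
        return f"session {self.session_id} ran: {sql}"   # placeholder work

class SessionPool:
    def __init__(self, size):
        self._pool = queue.Queue()
        for i in range(size):
            self._pool.put(Session(i))
    def acquire(self, timeout=5):
        return self._pool.get(timeout=timeout)   # blocks if all sessions are busy
    def release(self, session):
        self._pool.put(session)

pool = SessionPool(size=4)
s = pool.acquire()
print(s.query("SELECT * FROM changes"))
pool.release(s)
```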

7.
Research on Data Replication Techniques   (Total citations: 1; self-citations: 0; citations by others: 1)
To solve the data inconsistency problems caused by the geographic distribution of distributed databases, data replication has been widely applied for its outstanding advantages and has become a hot topic in database research. This paper introduces the basic concepts of data replication and gives an in-depth study and comparison of the replication tools and replication methods of today's mainstream database management systems.

8.
Data Replication
闪四清 《个人电脑》1999,5(4):169-172,174
Compared with earlier versions, SQL Server 7.0 offers significant enhancements in distributed data management. For example, the Data Transformation Services, integrated into a database management system for the first time, make it very convenient to convert data between a variety of data sources. The redesigned backup and restore functions are more complete and easier to use. The Web Assistant tool, which publishes data from the database to the Internet, is more flexible. Data replication in particular has been greatly enhanced, from replication functionality to its implementation and monitoring.

9.
Research on the Replication Mechanism of Distributed Database Systems Based on Oracle9i   (Total citations: 7; self-citations: 0; citations by others: 7)
In large-scale distributed database systems, database replication plays a critical role in system reliability and efficiency. The paper first discusses database replication techniques for distributed databases, then introduces the principles of the Oracle9i data replication mechanism, and gives concrete examples of its application in practical engineering projects.

10.
Data replication is a very important technique in distributed database systems: it improves the disaster tolerance of the distributed database and reduces transaction response time. This paper studies data replication techniques in detail and introduces the application of data replication in Oracle databases.

11.
陈江山  康慕宁  李兰兰 《微处理机》2007,28(6):59-62,66
The data-update flow forms the skeleton of a remote replication system. Depending on application requirements, the system can adopt different replication modes; this paper focuses on the data-update flow in asynchronous mode. Building on an analysis of existing asynchronous replication protocols, an asynchronous replication flow with improved performance is proposed. Prototype experiments show that this design maintains data consistency and keeps the system simple and reliable, while also reducing the network bandwidth and the number of I/O operations required during replication.
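A minimal sketch of a generic asynchronous update flow, assuming a made-up in-memory primary and replica rather than the paper's protocol: writes commit locally and are queued, and a background shipper applies them to the replica in batches, which is where the bandwidth and I/O savings come from:

```python
# Minimal sketch of an asynchronous replication flow: the primary commits locally
# and queues the change; a background worker ships changes to the replica in batches.
# All names are hypothetical; this is not the protocol proposed in the paper.
import threading, queue, time

change_log = queue.Queue()
replica = {}                       # stands in for the remote copy

def write(primary, key, value):
    primary[key] = value           # 1. commit locally (fast path)
    change_log.put((key, value))   # 2. record the change for later shipping

def shipper(batch_size=10, interval=0.5):
    while True:
        time.sleep(interval)       # 3. ship asynchronously, in batches
        batch = []
        while not change_log.empty() and len(batch) < batch_size:
            batch.append(change_log.get())
        for key, value in batch:
            replica[key] = value   # apply on the remote replica

primary = {}
threading.Thread(target=shipper, daemon=True).start()
write(primary, "order:42", "shipped")
time.sleep(1)                      # give the shipper time to catch up
print(replica)                     # {'order:42': 'shipped'}
```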

12.
A dynamic data replication strategy using access-weights in data grids   (Total citations: 2; self-citations: 0; citations by others: 2)
Data grids regularly deal with huge amounts of data, and ensuring efficient access to such widely distributed data sets is a fundamental challenge. Creating replicas at suitable sites through a data replication strategy can increase system performance: it shortens data access time and reduces bandwidth consumption. In this paper, a dynamic data replication mechanism called Latest Access Largest Weight (LALW) is proposed. LALW selects a popular file for replication and calculates a suitable number of copies and the grid sites at which to replicate it. By associating a different weight with each historical data access record, the importance of each record is differentiated: a more recent record has a larger weight, indicating that it is more pertinent to the current data access situation. A grid simulator, OptorSim, is used to evaluate the performance of this dynamic replication strategy. The simulation results show that LALW successfully increases the effective network usage, meaning that the LALW replication strategy can identify a popular file and replicate it to a suitable site without increasing the network burden too much.
Ruay-Shiung Chang
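As a rough sketch of the access-weighting idea (the interval structure and the halving factor below are assumptions, not the paper's exact formula), more recent access-history intervals can be given larger weights and the file with the largest weighted count chosen for replication:

```python
# Minimal sketch of the LALW idea: access counts from more recent history
# intervals get larger weights, and the file with the largest weighted count
# is the candidate for replication. Parameters here are hypothetical.
def lalw_popular_file(history, half_weight=0.5):
    """history: list of {filename: access_count} dicts, oldest interval first."""
    scores = {}
    for age, interval in enumerate(reversed(history)):   # age 0 = most recent
        weight = half_weight ** age                       # newer records weigh more
        for filename, count in interval.items():
            scores[filename] = scores.get(filename, 0.0) + weight * count
    return max(scores, key=scores.get) if scores else None

history = [{"a": 9, "b": 1}, {"a": 1, "b": 5}, {"b": 6}]   # oldest -> newest
print(lalw_popular_file(history))                          # 'b'
```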

13.
The Data Grid provides massive aggregated computing resources and distributed storage space to deal with data-intensive applications. Due to the limited resources available in the grid and the production of large volumes of data, efficient use of Grid resources becomes an important challenge. Data replication is a key optimization technique for reducing access latency and managing large data sets by storing data in a wise manner. Effective scheduling in the Grid can reduce the amount of data transferred among nodes by submitting a job to a node where most of the requested data files are available. In this paper two strategies are proposed. First, a novel job scheduling strategy called Weighted Scheduling Strategy (WSS) uses hierarchical scheduling to reduce the search time for an appropriate computing node; it considers the number of jobs waiting in a queue, the location of the data required by the job, and the computing capacity of the sites. Second, a dynamic data replication strategy called Enhanced Dynamic Hierarchical Replication (EDHR) improves file access time. This strategy is an enhanced version of the Dynamic Hierarchical Replication strategy: it uses an economic model for file deletion when there is not enough space for a new replica, and the economic model is based on the future value of a data file. Best replica placement plays an important role in obtaining the maximum benefit from replication as well as in reducing storage cost and mean job execution time, so it is also considered in this paper. The proposed strategies are implemented in OptorSim, the European Data Grid simulator. Experimental results show that the proposed strategies achieve better performance by minimizing data access time and avoiding unnecessary replication.
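As a hedged illustration of the replacement step only (the value estimate below is a plain recency-weighted access count standing in for the paper's economic model, and all names are hypothetical), when storage is insufficient the replicas with the lowest estimated future value can be deleted first:

```python
# Minimal sketch: when space is needed, delete the replicas whose estimated
# future value is lowest. The value estimate is a simple recency-weighted
# access count, standing in for the paper's economic model.
import math

def estimate_value(access_times, now, decay=0.01):
    # more recent accesses contribute more to the predicted future value
    return sum(math.exp(-decay * (now - t)) for t in access_times)

def free_space(replicas, needed_bytes, now):
    """replicas: dict of filename -> {'size': bytes, 'accesses': [timestamps]}"""
    victims = sorted(replicas, key=lambda f: estimate_value(replicas[f]["accesses"], now))
    freed, deleted = 0, []
    for f in victims:
        if freed >= needed_bytes:
            break
        freed += replicas[f]["size"]
        deleted.append(f)
        del replicas[f]        # evict the least valuable replica first
    return deleted

replicas = {
    "fileA": {"size": 400, "accesses": [10, 90, 95]},
    "fileB": {"size": 400, "accesses": [5, 20]},
}
print(free_space(replicas, needed_bytes=300, now=100))   # ['fileB']
```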

14.
In recent years, grid technology has grown so fast that it is used in many scientific experiments and research centers. A large number of storage elements and computational resources are combined into a grid that gives shared access to extra computing power. In particular, the data grid deals with data-intensive applications and provides intensive resources to widely distributed communities. Data replication is an efficient way to distribute replicas across a data grid, making it possible to access the same data at different locations; replication reduces data access time and improves system performance. In this paper, we propose a new dynamic data replication algorithm named PDDRA that improves on traditional algorithms. Our proposed algorithm is based on one assumption: members of a VO (Virtual Organization) have similar interests in files. Based on this assumption and on file access history, PDDRA predicts the future needs of grid sites and pre-fetches a sequence of files to the requesting grid site, so that the next time the site needs one of those files, it is available locally. This considerably reduces access latency, response time, and bandwidth consumption. PDDRA consists of three phases: storing file access patterns; requesting a file and performing replication and pre-fetching; and replacement. The algorithm was tested using OptorSim, a grid simulator developed by the European Data Grid project. The simulation results show that our proposed algorithm performs better than other algorithms in terms of job execution time, effective network usage, total number of replications, hit ratio, and percentage of storage filled.
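Purely as an illustration of history-based pre-fetching (the successor-frequency predictor below is an assumption, not PDDRA's actual prediction method), the files most often requested immediately after the current file in past access sequences can be pre-fetched to the requesting site:

```python
# Minimal sketch of history-based pre-fetching: count which files were requested
# immediately after each file in past access sequences, and pre-fetch the most
# frequent successors of the file being requested now.
from collections import defaultdict, Counter

def build_successor_model(access_sequences):
    successors = defaultdict(Counter)
    for seq in access_sequences:
        for current, nxt in zip(seq, seq[1:]):
            successors[current][nxt] += 1      # nxt followed current once more
    return successors

def files_to_prefetch(successors, requested_file, k=2):
    return [f for f, _ in successors[requested_file].most_common(k)]

history = [["a", "b", "c"], ["a", "b", "d"], ["x", "b", "c"]]
model = build_successor_model(history)
print(files_to_prefetch(model, "b"))   # ['c', 'd'] -- fetch alongside 'b'
```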

15.
A mobile ad hoc network (MANET) is a network that allows mobile servers and clients to communicate in the absence of a fixed infrastructure. MANETs are a fast-growing area of research, as they find use in a variety of applications. To facilitate efficient data access and update, databases are deployed on MANETs; these databases are referred to as MANET databases. Since data availability in MANETs is affected by the mobility and power constraints of the servers and clients, data in MANETs are replicated. A number of data replication techniques have been proposed for MANET databases. This paper identifies the issues involved in MANET data replication and classifies existing MANET data replication techniques based on the issues they address. The attributes of the replication techniques are also tabulated to facilitate a feature comparison of existing MANET data replication work. Parameters and performance metrics are presented to measure the performance of MANET replication techniques. In addition, the paper proposes criteria for selecting appropriate data replication techniques for various application requirements. Finally, the paper concludes with a discussion of future research directions.

16.
This paper introduces the concepts of data replication technology, focusing on the advantages, disadvantages, and applicable scenarios of the replication techniques offered by MS SQL Server 7.0. Drawing on problems encountered in practical work, it proposes corresponding solutions and explores, in both theory and practice, the application of MS SQL Server 7.0 data replication in MIS.

17.
A data grid is a distributed collection of storage and computational resources that are not bound to a single geographical location. It is a fast-growing area of research, and providing efficient data access and maximum data availability is a challenging task. To achieve this, data is replicated to different sites. A number of data replication techniques have been presented for data grids. All replication techniques address attributes such as fault tolerance, scalability, bandwidth consumption, performance, storage consumption, and data access time. In this paper, the different issues involved in data replication are identified, and different replication techniques are studied to find out which attributes are addressed by a given technique and which are ignored. A tabular representation of all those parameters is presented to facilitate future comparison of dynamic replication techniques. The paper also discusses future work in this direction by identifying some open research problems.

18.
Advances in information technology have driven the adoption of OA (office automation) systems, especially remote OA systems, while network traffic limitations have increasingly turned the Internet into a bottleneck for such systems. To address this bottleneck, a data distribution strategy for OA systems is proposed: data replication operations are studied, the principles of data replication and conflict-resolution strategies are analyzed, and finally an Agent-based transparent replication model is designed and put into practical use.

19.
Data replication, an essential service for MANETs, is used to increase data availability by creating local or nearby copies of frequently used items, to reduce communication overhead, and to achieve fault tolerance and load balancing. Data replication protocols proposed for MANETs are often prone to scalability problems, due either to their design or to the underlying routing protocols they are based on; in particular, they exhibit poor performance when the network size is scaled up. However, scalability is an important criterion for several MANET applications. We propose a scalable and reactive data replication approach, named SCALAR, combined with a low-cost data lookup protocol. SCALAR is a virtual-backbone-based solution, in which the network nodes construct a connected dominating set based on the network topology graph. To the best of our knowledge, SCALAR is the first work applying a virtual backbone structure to operate a data lookup and replication process in MANETs. A theoretical message-complexity analysis of the proposed protocols is given. Extensive simulations are performed to analyze and compare the behavior of SCALAR, and it is shown to outperform the other solutions in terms of data accessibility, message overhead, and query deepness. It is also demonstrated to be an efficient solution for high-density, high-load, large-scale mobile ad hoc networks.
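As a simplified sketch of the virtual-backbone construction (a plain greedy connected-dominating-set heuristic, not SCALAR's exact algorithm; the topology below is made up), backbone nodes can be chosen greedily so that every node is either in the backbone or adjacent to it:

```python
# Minimal sketch of building a virtual backbone as a greedy connected dominating
# set. `graph` maps each node to the set of its neighbours; the graph is assumed
# to be connected. This is a simplified heuristic, not SCALAR's construction.
def greedy_connected_dominating_set(graph):
    start = max(graph, key=lambda n: len(graph[n]))      # highest-degree seed
    backbone = {start}
    covered = {start} | graph[start]
    while covered != set(graph):
        # candidates are covered nodes adjacent to the backbone but not yet in it
        candidates = covered - backbone
        best = max(candidates, key=lambda n: len(graph[n] - covered))
        backbone.add(best)                               # grow backbone, stay connected
        covered |= graph[best]
    return backbone

topology = {
    "a": {"b"}, "b": {"a", "c", "d"}, "c": {"b", "e"},
    "d": {"b"}, "e": {"c", "f"}, "f": {"e"},
}
print(greedy_connected_dominating_set(topology))   # {'b', 'c', 'e'}
```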

20.
Research on Applying Oracle Advanced Replication in Distributed Database Systems   (Total citations: 16; self-citations: 1; citations by others: 16)
Against the background of a CIMS engineering application, this paper presents a scheme that uses Oracle Advanced Replication to implement distributed processing of remote enterprise information. It discusses the data consistency problems encountered in the implementation in detail, proposes methods for dealing with weak consistency and processing lag by configuring manual replication and setting a reasonable automatic replication interval, and describes a group-based implementation strategy for resolving various data conflicts.
