Similar Literature
20 similar documents found (search time: 144 ms)
1.
Conflict Detection and Resolution Strategies for Mobile Replicated Database Systems   Total citations: 9 (self-citations: 0, others: 9)
Replication is a key technique for improving the performance of mobile database systems. This paper proposes a new model for mobile replicated database systems, the Transaction-Level Result-Set Propagation (TLRSP) mobile replication model, analyzes its conflict detection and resolution strategies in detail, and gives concrete implementation algorithms. The TLRSP model allows mobile users to access the local replica of the database and commit transactions while disconnected; upon reconnection, conflicts are detected and resolved, transaction result sets are merged, and synchronization is finally carried out through incremental refresh, so that the system eventually converges to a consistent state. In addition, by introducing simplified transaction logs, data version numbers, and access control, the TLRSP model effectively reduces the resource consumption of the mobile database system and guarantees database consistency, thus providing a practical replication solution for mobile database systems.
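A minimal sketch of how data version numbers can drive conflict detection when a disconnected client reconnects. The function and field names (`detect_conflicts`, `base_version`) are illustrative assumptions, not the TLRSP paper's actual data structures.

```python
# Illustrative sketch of version-number-based conflict detection on reconnect.
# All names are hypothetical; the TLRSP model's real structures differ.

def detect_conflicts(local_writes, server_versions):
    """local_writes: {key: (base_version, new_value)} recorded while disconnected.
    server_versions: {key: current_version} on the central server."""
    applicable, conflicts = [], []
    for key, (base_version, new_value) in local_writes.items():
        if server_versions.get(key, 0) != base_version:
            conflicts.append(key)            # item was updated elsewhere meanwhile
        else:
            applicable.append((key, new_value))
    return applicable, conflicts

# Usage: apply the non-conflicting part of the result set, resolve the conflicts
# (e.g., server-wins or an application-specific rule), then do an incremental refresh.
```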

2.
Data replication is an important means by which distributed databases improve availability. By placing partial replicas of the database in different regions, the response time of local read and write operations can also be improved, and increasing the number of replicas improves the linear scalability of read workloads. Given these desirable properties, many multi-replica distributed database systems have emerged in recent years, both in China and abroad, including mainstream industrial systems such as Google Spanner, CockroachDB, TiDB, and OceanBase, as well as notable academic systems such as Calvin, Aria, and Berkeley Anna. However, along with these benefits, multi-replica databases also bring challenges such as consistency maintenance, cross-node transactions, and transaction isolation. This paper surveys and analyzes existing replication architectures, consistency maintenance strategies, and concurrency control techniques for cross-node transactions, compares the differences and commonalities in distributed transaction processing among several representative multi-replica database systems, and builds a cross-region distributed cluster on Alibaba Cloud to experimentally evaluate the distributed transaction processing capabilities of these representative systems.

3.
Synchronization is a key technique for maintaining consistency in replicated mobile database systems. Existing mobile replication synchronization techniques suffer from large communication volume and high storage consumption; in particular, when network bandwidth drops they cannot update client data in time, causing mobile transactions to fail. This paper combines the UTLRSP (Union Transaction-Level Result-Set Propagation) replication synchronization model with data broadcasting, and uses a priority-based incremental update algorithm to synchronize data between clients and the central database server. Experimental results show that, compared with the two-tier replication mechanism, the UTLRSP model processes transactions as associated units and stores only transaction results, effectively reducing storage consumption and the amount of data communicated during synchronization. The priority-based incremental update algorithm orders priorities by data freshness, ensuring that the freshest data is transmitted first when wireless bandwidth drops, which improves transmission efficiency, dynamic freshness, and client scalability.
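A small sketch of priority-based incremental updating ordered by data freshness, as described above. The queue structure and names (`build_update_queue`, `bandwidth_budget`) are assumptions for illustration, not the UTLRSP paper's algorithm.

```python
# Illustrative sketch: order pending updates by freshness and ship the freshest first.
import heapq
import time

def build_update_queue(pending_updates):
    """pending_updates: iterable of (key, value, last_modified) tuples.
    More recently modified data (fresher) gets higher priority."""
    heap = []
    for key, value, last_modified in pending_updates:
        age = time.time() - last_modified        # smaller age == fresher == higher priority
        heapq.heappush(heap, (age, key, value))
    return heap

def push_updates(heap, send, bandwidth_budget):
    """Transmit the freshest updates first until the bandwidth budget is exhausted."""
    sent = 0
    while heap and sent < bandwidth_budget:
        _, key, value = heapq.heappop(heap)
        send(key, value)
        sent += 1
    return heap   # remaining updates wait for the next synchronization round
```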

4.
The immaturity of current intrusion detection techniques limits the security and availability of real-time database systems. Building on intrusion tolerance techniques for traditional relational databases, and considering the characteristics of real-time database transactions and data, this paper proposes an architecture for an intrusion-tolerant real-time database system. The architecture divides the system into a primary node and secondary nodes. Through a transaction proxy mechanism, intrusion detection first filters out malicious transactions; legitimate transactions run on the primary node, while suspicious transactions run on the secondary nodes. Suspicious transactions are then examined again: if found legitimate they are synchronized to the primary node, otherwise they are discarded. This architecture effectively guarantees transaction consistency and integrity. Moreover, detection runs concurrently with transaction execution, satisfying the real-time requirements of real-time database systems.

5.
Transaction-Level Synchronization Strategies for Replicated Mobile Database Systems   Total citations: 11 (self-citations: 0, others: 11)
丁治明, 孟小峰, 王珊. 《软件学报》 (Journal of Software), 2002, 13(2): 258-265
Synchronization is a key technique for maintaining consistency in replicated mobile database systems, but existing transaction-level synchronization algorithms have certain limitations. To overcome these shortcomings and improve practicality, this paper proposes a new synchronization model for mobile databases: the Double-Timestamp Transaction-Level Synchronization (DTSTLS) model. DTSTLS adopts a three-tier replication architecture in which generic commercial database products can be used directly as the database server, giving the system good extensibility. As an asynchronous multi-master replication method, DTSTLS allows a mobile computer to access its local replica while disconnected, temporarily leaving the system inconsistent; upon reconnection, conflict detection and synchronization are performed so that the system converges to a consistent state again. In addition, a distinctive timestamp handling strategy reduces communication cost and resource consumption. Experimental results show that the DTSTLS model improves resource utilization in mobile database systems while guaranteeing serializability of transaction schedules and database consistency.
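A minimal sketch of conflict detection using two timestamps per data item, one for the last server synchronization and one for the latest local update. This interpretation of "double timestamp" and all names are assumptions, not the DTSTLS paper's definitions.

```python
# Illustrative dual-timestamp reconciliation on reconnect (names hypothetical).
from dataclasses import dataclass

@dataclass
class Item:
    value: object
    server_ts: float   # timestamp of the version last received from the server
    local_ts: float    # timestamp of the latest disconnected local update (0 if none)

def reconcile(local: Item, server: Item) -> str:
    locally_modified = local.local_ts > local.server_ts
    server_advanced = server.server_ts > local.server_ts
    if locally_modified and server_advanced:
        return "conflict"      # both sides changed since the last sync
    if locally_modified:
        return "push_local"    # propagate the disconnected update to the server
    if server_advanced:
        return "pull_server"   # refresh the local replica with the newer version
    return "unchanged"
```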

6.
Research on Database Transaction Recovery Logs and an Intrusion Response Model   Total citations: 1 (self-citations: 0, others: 1)
Database logs record the change history of data elements and are the principal basis for maintaining the correctness and consistency of a database system. Existing log formats cannot express dependencies between transactions, so when the system suffers a malicious attack it has to restore all data elements to their state at the point of failure, giving it poor intrusion tolerance. This paper proposes a new transaction recovery log model, uses abstract state machines to describe the log generation rules and the intrusion response model, formally defines the dependency relationships between transactions, and analyzes the completeness and correctness of the intrusion response model. A database system configured with this transaction recovery log and intrusion response mechanism can, when attacked, recover only the successors affected by the malicious transactions instead of rolling back all transactions, thereby improving the survivability of the database system.
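A short sketch of selective recovery driven by a transaction dependency graph: only transactions that transitively depend on a malicious transaction are rolled back. The graph representation and names are assumptions, not the paper's abstract-state-machine model.

```python
# Illustrative sketch of dependency-based selective rollback (names hypothetical).
from collections import deque

def affected_transactions(dependencies, malicious):
    """dependencies: {txn_id: set of txn_ids that read its writes (its successors)}.
    malicious: iterable of transaction ids identified as malicious."""
    affected, queue = set(malicious), deque(malicious)
    while queue:
        txn = queue.popleft()
        for successor in dependencies.get(txn, ()):
            if successor not in affected:
                affected.add(successor)
                queue.append(successor)
    return affected

# Usage: roll back only affected_transactions(deps, {"T3"}),
# leaving independent transactions untouched.
```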

7.
郑然, 李战怀, 王彦龙. 《计算机应用》 (Journal of Computer Applications), 2007, 27(10): 2379-2382
Drawing on database systems and data disaster-recovery systems, and addressing the fact that existing logging techniques cannot cope with the limit on log size, this paper designs a bitmap-based log overflow protection mechanism. The implementation of the overflow protection mechanism is analyzed around data consistency and atomic operations, and its advantages over traditional mechanisms are compared. Prototype experiments show that the mechanism effectively handles log overflow caused by network congestion or sudden surges in I/O requests, providing a safeguard for data replication.
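A minimal sketch of one common bitmap-based overflow scheme: when the ordered change log fills up, further writes are tracked only as dirty bits per block, so replication can later resynchronize exactly those blocks. The class and its fields are illustrative assumptions, not the paper's design.

```python
# Illustrative bitmap overflow protection for a replication change log (names hypothetical).
class ChangeTracker:
    def __init__(self, log_capacity, num_blocks):
        self.log = []                        # ordered change records (block_id, data)
        self.log_capacity = log_capacity
        self.bitmap = bytearray(num_blocks)  # 1 == block is dirty and needs resync
        self.overflowed = False

    def record_write(self, block_id, data):
        if not self.overflowed and len(self.log) < self.log_capacity:
            self.log.append((block_id, data))
        else:
            self.overflowed = True           # degrade: remember only WHICH blocks changed
            self.bitmap[block_id] = 1

    def blocks_to_resync(self):
        return [i for i, dirty in enumerate(self.bitmap) if dirty]
```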

8.
For main-memory database systems serving update-intensive applications, checkpointing must meet several key requirements: the checkpoint operation should interfere with normal transaction processing as little as possible, handle access skew, support fast recovery of the database system, and keep the system available during recovery. This paper proposes a transaction-consistent partitioned checkpointing technique. A tuple-level dynamic multi-version concurrency control mechanism avoids lock conflicts between read and write transactions and improves system throughput. The checkpoint operation is implemented as a read-only transaction, so under multi-version concurrency control it does not block normal transaction processing. Because the checkpoint file is transaction-consistent, only redo log records need to be kept, and during recovery the log file is scanned only once, speeding up recovery. Priority-based loading and recovery of data partitions allows the data access requests of new transactions to be satisfied quickly during recovery, keeping the system available. A two-level version management mechanism together with dynamic version sharing keeps the space overhead of multi-version management at an acceptable level. Experimental results show that the proposed checkpointing scheme achieves 27% higher system throughput than fuzzy checkpointing, while the space overhead of version management stays within an acceptable range, meeting the requirements of high-performance applications.
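A small sketch of the core idea of a checkpoint running as a read-only transaction over multi-versioned tuples: it reads the newest version no later than its start timestamp, so concurrent writers are never blocked. The store below is a generic illustration under that assumption, not the paper's implementation.

```python
# Illustrative multi-version store with a non-blocking snapshot checkpoint (names hypothetical).
import threading

class VersionedStore:
    def __init__(self):
        self.versions = {}          # key -> list of (commit_ts, value), ascending
        self.clock = 0
        self.lock = threading.Lock()

    def write(self, key, value):
        with self.lock:
            self.clock += 1
            self.versions.setdefault(key, []).append((self.clock, value))

    def snapshot_read(self, key, snapshot_ts):
        for commit_ts, value in reversed(self.versions.get(key, [])):
            if commit_ts <= snapshot_ts:
                return value
        return None

    def checkpoint(self):
        snapshot_ts = self.clock    # fixed read timestamp for the whole scan
        return {k: self.snapshot_read(k, snapshot_ts) for k in list(self.versions)}
```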

9.
Database clustering is an effective mechanism for increasing the transaction throughput of a database system and reducing response time. This paper studies and implements a general-purpose shared-nothing database cluster composed of heterogeneous node databases. The system supports horizontal data partitioning and data replication, and offers good cost-effectiveness and scalability.

10.
Research on the Logging Mechanism of Device Replication Systems   Total citations: 4 (self-citations: 0, others: 4)
Device replication systems use data replication to achieve disaster recovery, and the logging mechanism is the core and foundation of the entire system. Drawing on the logs of file systems and database systems, this paper proposes a logging mechanism that satisfies the characteristics and design requirements of device replication systems, and discusses in detail the log's organization, generation, and access methods, as well as the handling of special cases such as overflow and correction. Prototype experiments show that the logging mechanism is well implemented in the device replication system and provides a safeguard for data replication.

11.
王芬, 顾乃杰, 黄增士. 《计算机科学》 (Computer Science), 2017, 44(10): 165-170
With the rapid development of the Internet, users obtain ever more information from online systems and access them increasingly frequently. When a large number of clients access a system, request response times rise sharply, and traditional relational databases can no longer satisfy user demands, whereas in-memory databases improve the user experience while keeping the system stable and have been adopted ever more widely. As a NoSQL in-memory database, Redis supports many data types and suits caching and storage needs in a variety of scenarios. This paper focuses on Redis Cluster, the distributed implementation of Redis, which supports master-slave replication and offers a degree of fault tolerance and linear scalability; websites currently using Redis Cluster include Sina Weibo and GitHub. Although Redis Cluster is widely deployed, it currently exhibits long recovery times after a node goes offline, which stems from the cluster's election algorithm, i.e., its implementation of the Raft algorithm. This paper analyzes the reliability of Redis Cluster and optimizes the cluster's election algorithm. Test results show that within 50 s of a single master node going offline, the optimized cluster always recovers successfully, a 40% improvement over the community edition of the cluster.
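For context on the election mechanism mentioned above, here is a generic sketch of a Raft-style randomized election timer: a follower waits a randomized timeout before starting an election, which reduces split votes and bounds failover time. This is a textbook illustration, not Redis Cluster's actual code or the paper's optimization.

```python
# Generic Raft-style election timer (illustrative; not Redis Cluster internals).
import random
import time

def run_follower(heartbeat_received, start_election, timeout_range=(0.15, 0.30)):
    """heartbeat_received(): True if the leader was heard from since the last check.
    start_election(): begins a new election round (requests votes)."""
    election_timeout = random.uniform(*timeout_range)
    deadline = time.monotonic() + election_timeout
    while time.monotonic() < deadline:
        if heartbeat_received():
            deadline = time.monotonic() + election_timeout  # leader alive: reset the timer
        time.sleep(0.01)
    start_election()   # leader presumed failed: become a candidate
```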

12.
The emerging edge services architecture promises to improve the availability and performance of Web services by replicating servers at geographically distributed sites. A key challenge in such systems is data replication and consistency, so that edge server code can manipulate shared data without suffering the availability and performance penalties that would be incurred by accessing a traditional centralized database. This work explores using a distributed object architecture to build an edge service data replication system for an e-commerce application, the TPC-W benchmark, which simulates an online bookstore. We take advantage of application-specific semantics to design distributed objects, each of which manages a specific subset of shared information using simple and effective consistency models. Our experimental results show that by slightly relaxing consistency within individual distributed objects, our application achieves both high availability and excellent performance. For example, in one experiment, we find that our object-based edge server system provides five times better response time than a traditional centralized cluster architecture and a factor-of-nine improvement over an edge service system that distributes code but retains a centralized database.

13.
As we delve deeper into the ‘Digital Age’, we witness explosive growth in the volume, velocity, and variety of the data available on the Internet. For example, in 2012 about 2.5 quintillion bytes of data were created daily, originating from a myriad of sources and applications including mobile devices, sensors, individual archives, social networks, the Internet of Things, enterprises, cameras, software logs, etc. Such ‘data explosions’ have led to one of the most challenging research issues of the current Information and Communication Technology era: how to optimally manage (e.g., store, replicate, and filter) such a large amount of data and identify new ways to analyze it for unlocking information. It is clear that such large data streams cannot be managed by setting up on-premises enterprise database systems, as this leads to large up-front costs in buying and administering the hardware and software systems. Therefore, next-generation data management systems must be deployed on the cloud. The cloud computing paradigm provides scalable and elastic resources, such as data and services accessible over the Internet. Every cloud service provider must ensure that data is efficiently processed and distributed in a way that does not compromise end-users’ Quality of Service (QoS) in terms of data availability, data search delay, data analysis delay, and the like. From this perspective, data replication is used in the cloud to improve the performance (e.g., read and write delay) of applications that access data. Through replication, a data-intensive application or system can achieve high availability, better fault tolerance, and data recovery. In this paper, we survey data management and replication approaches (from 2007 to 2011) developed by both industrial and research communities. The focus of the survey is to discuss and characterize the existing approaches to data replication and management that tackle resource usage and QoS provisioning with different levels of efficiency. Moreover, we examine how data replication and data management each contribute to different QoS attributes. Furthermore, the performance advantages and disadvantages of data replication and management approaches in cloud computing environments are analyzed. Open issues and future challenges related to data consistency, scalability, load balancing, processing, and placement are also reported.

14.
The ubiquity of the Internet and various intranets has brought about widespread availability of online services and applications accessible through the network. Cluster-based network services have been rapidly emerging due to their cost-effectiveness in achieving high availability and incremental scalability. We present the design and implementation of the Neptune middleware system that provides clustering support and replication management for scalable network services. Neptune employs a loosely connected and functionally symmetric clustering architecture to achieve high scalability and robustness. It shields the clustering complexities from application developers through simple programming interfaces. In addition, Neptune provides replication management with flexible replication consistency support at the clustering middleware level. Such support can be easily applied to a large number of applications with different underlying data management mechanisms or service semantics. The system has been implemented on Linux and Solaris clusters, where a number of applications have been successfully deployed. Our evaluations demonstrate the system performance and smooth failure recovery achieved by the proposed techniques.

15.
We address the problem of garbage collection in a single-failure fault-tolerant home-based lazy release consistency (HLRC) distributed shared-memory (DSM) system based on independent checkpointing and logging. Our solution uses laziness in garbage collection and exploits consistency constraints of the HLRC memory model for low overhead and scalability. We prove safe bounds on the state that must be retained in the system to guarantee correct recovery after a failure. We devise two algorithms for garbage collection of checkpoints and logs, checkpoint garbage collection (CGC), and lazy log trimming (LLT). The proposed approach targets large-scale distributed shared-memory computing on local-area clusters of computers. The challenge lies in controlling the size of the logs and the number of checkpoints without global synchronization while tolerating transient disruptions in communication. Evaluation results for real applications show that it effectively bounds the number of past checkpoints to be retained and the size of the logs in stable storage.

16.
In this paper, we address the problem of garbage collection in a single-failure fault-tolerant home-based lazy release consistency (HLRC) distributed shared-memory (DSM) system based on independent checkpointing and logging. Our solution uses laziness in garbage collection and exploits consistency constraints of the HLRC memory model for low overhead and scalability. We prove safe bounds on the state that must be retained in the system to guarantee correct recovery after a failure. We devise two algorithms for garbage collection of checkpoints and logs, checkpoint garbage collection (CGC), and lazy log trimming (LLT). The proposed approach targets large-scale distributed shared-memory computing on local-area clusters of computers. In such systems, using global synchronization or extra communication for garbage collection is inefficient or simply impractical due to system scale and temporary disconnections in communication. The challenge lies in controlling the size of the logs and the number of checkpoints without global synchronization while tolerating transient disruptions in communication. Our garbage collection scheme is completely distributed, does not force processes to synchronize, does not add extra messages to the base DSM protocol, and uses only the available DSM protocol information. Evaluation results for real applications show that it effectively bounds the number of past checkpoints to be retained and the size of the logs in stable storage.

17.
As an important component of financial IT infrastructure, databases must cope with continuous business growth and the demands of high availability and scalability. Traditional single-node database architectures, represented by MySQL and Oracle, can no longer meet current financial service requirements in terms of availability, scalability, and storage capacity. Distributed databases aim to address the challenges faced by single-machine databases, providing a more flexible architecture and ensuring stable system operation. Based on actual financial business requirements, this paper studies and implements a distributed database featuring distributed transaction support, a distributed SQL engine, and hybrid transactional/analytical processing. The system adopts a fully redundant component design, guarantees high availability of the storage layer and strong data consistency through a Raft-like enhanced consistency algorithm, and ensures high availability of the scheduling layer with a ZooKeeper-based cluster scheduling scheme.

18.
On-line transaction processing (OLTP) systems rely on transaction logging and quorum-based consensus protocols to guarantee durability, high availability, and strong consistency. This makes the log manager a key component of distributed database management systems (DDBMSs). The leader of a DDBMS commonly adopts a centralized logging method to write log entries to a stable storage device and uses a constant log replication strategy to periodically synchronize its state to followers. With the advent of new hardware and highly parallel transaction processing, the traditional centralized design of logging limits scalability, and a constant replication trigger condition cannot always maintain optimal performance under dynamic workloads. In this paper, we propose a new log manager named Salmo with scalable logging and adaptive replication for distributed database systems. The scalable logging eliminates centralized contention by utilizing a highly concurrent data structure and speedy log hole tracking. The kernel of adaptive replication is an adaptive log shipping method, which dynamically adjusts the number of log entries transmitted between leader and followers based on the real-time workload. We implemented and evaluated Salmo in the open-sourced transaction processing systems Cedar and DBx1000. Experimental results show that Salmo scales well as the number of working threads increases, improves peak throughput by 1.56× and reduces latency by more than 4× compared with Raft's log replication, and consistently maintains efficient and stable performance under dynamic workloads.
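A minimal sketch of the adaptive log shipping idea: the leader adjusts how many log entries it batches per replication message based on the observed workload. The policy and names below are a generic interpretation, not Salmo's actual heuristics.

```python
# Illustrative adaptive log-shipping batch sizing (names and policy are hypothetical).
class AdaptiveShipper:
    def __init__(self, min_batch=1, max_batch=1024):
        self.batch_size = min_batch
        self.min_batch, self.max_batch = min_batch, max_batch

    def adjust(self, queued_entries, follower_lag):
        # Heavy write load -> larger batches to amortize network round trips;
        # light load or lagging followers -> smaller batches to cut commit latency.
        if queued_entries > self.batch_size * 2 and follower_lag < 100:
            self.batch_size = min(self.batch_size * 2, self.max_batch)
        elif queued_entries < self.batch_size // 2:
            self.batch_size = max(self.batch_size // 2, self.min_batch)
        return self.batch_size
```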

19.
In the mobile big-data environment, traditional location-based service (LBS) techniques face challenges in system scalability and performance. Considering the characteristics of LBS applications, this paper first proposes a Storm-based query framework. Within this framework, it designs and implements a parallel continuous range query algorithm to optimize query performance. To address consistency in the distributed environment, a ZooKeeper-based distributed lock service is used to guarantee the correctness of query results. Furthermore, since the parallel continuous range query algorithm on Storm incurs a high database access cost, a TimeCacheMap-based caching optimization algorithm with two caching strategies is proposed, which reduces database accesses and improves query efficiency.
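A minimal Python analog of a time-expiring cache in the spirit of Storm's TimeCacheMap: entries older than a TTL are evicted, so hot query results are served from memory instead of the database. The class and its methods are illustrative assumptions, not the paper's caching strategies.

```python
# Illustrative time-expiring cache in front of a database (names hypothetical).
import time

class TimeCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}                     # key -> (value, insert_time)

    def get(self, key, load_from_db):
        self._expire()
        if key in self.store:
            return self.store[key][0]       # cache hit: skip the database
        value = load_from_db(key)           # cache miss: query the database once
        self.store[key] = (value, time.monotonic())
        return value

    def _expire(self):
        now = time.monotonic()
        for key in [k for k, (_, t) in self.store.items() if now - t > self.ttl]:
            del self.store[key]
```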

20.
State machine replication has been widely used in modern cluster-based database systems. Most commonly deployed configurations adopt a Raft-like consensus protocol, which has a single strong leader that replicates the log to other followers. Since followers can handle read requests and many real workloads are read-intensive, the recovery speed of a crashed follower may significantly impact throughput. Unlike traditional database recovery, the recovering follower needs to repair its local log first. The original Raft protocol takes many network round trips to compare the logs of the leader and the crashed follower. To reduce network round trips, one optimization is to truncate the follower's uncertain log entries beyond the latest local commit point and then directly fetch all committed log entries from the leader in one round trip. However, if the commit point is not persisted, the recovering follower has to get the whole log from the leader. In this paper, we propose an accurate and efficient log repair (AELR) algorithm for follower recovery. AELR is more robust and resilient to follower failure, and it needs only one network round trip to fetch the minimum number of log entries for follower recovery. This approach is implemented in the open-source database system OceanBase. We experimentally show that a system adopting AELR performs well in terms of recovery time.
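A small sketch of the optimization described above (truncate behind the commit point, then fetch from the leader in one round trip). The function signature and helpers are hypothetical, not AELR's actual interface.

```python
# Illustrative follower log repair on recovery (names hypothetical, not AELR's API).
def repair_follower_log(local_log, commit_index, fetch_from_leader):
    """local_log: list of (index, term, entry); commit_index: last index known committed.
    fetch_from_leader(start_index): returns committed entries with index >= start_index."""
    # Entries beyond the commit point may diverge from the leader's log: drop them.
    repaired = [entry for entry in local_log if entry[0] <= commit_index]
    # A single round trip fills in everything the leader has committed since then.
    repaired.extend(fetch_from_leader(commit_index + 1))
    return repaired
```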
