Similar Documents
20 similar documents found.
1.
To address the low backup efficiency and wasted storage caused by the large amount of redundant data in traditional remote backup, a remote backup system based on data deduplication was designed and implemented. The backup files are first divided into variable-length chunks by content using Rabin fingerprints, and the metadata of each chunk is sent to the backup center, where an index built on Google Bigtable and LevelDB, assisted by a Bloom filter, determines whether each chunk is a duplicate; only non-duplicate chunks are transferred and stored. Experimental results show that the system effectively removes duplicate data when backing up similar datasets. For incremental backups with small incremental changes, it generates less network traffic than Rsync backup.
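The chunk-and-check pipeline summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the rolling hash is a toy stand-in for a Rabin fingerprint, the chunk-size constants are assumed, and the in-memory chunk_store dict stands in for the Bigtable/LevelDB index at the backup center.

```python
import hashlib

CHUNK_MASK = (1 << 13) - 1      # ~8 KiB average chunk size (assumed)
MIN_CHUNK, MAX_CHUNK = 2048, 65536

def cdc_chunks(data: bytes):
    """Split data into variable-length chunks with a simple polynomial
    rolling hash standing in for a Rabin fingerprint."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF          # toy content-defined hash
        if i - start + 1 < MIN_CHUNK:
            continue
        if (h & CHUNK_MASK) == 0 or i - start + 1 >= MAX_CHUNK:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

class BloomFilter:
    """Minimal Bloom filter used to pre-screen chunk fingerprints."""
    def __init__(self, size_bits=1 << 20, hashes=4):
        self.bits = bytearray(size_bits // 8)
        self.size, self.hashes = size_bits, hashes

    def _positions(self, key: bytes):
        for i in range(self.hashes):
            d = hashlib.sha1(key + bytes([i])).digest()
            yield int.from_bytes(d[:8], "big") % self.size

    def add(self, key: bytes):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

def backup(data: bytes, bloom: BloomFilter, chunk_store: dict) -> int:
    """Transfer/store only chunks whose fingerprints are not already known;
    the Bloom filter cheaply screens out chunks that are definitely new."""
    new_bytes = 0
    for chunk in cdc_chunks(data):
        fp = hashlib.sha256(chunk).digest()
        if bloom.might_contain(fp) and fp in chunk_store:    # confirmed duplicate
            continue
        bloom.add(fp)
        chunk_store[fp] = chunk          # stand-in for the backup center's store
        new_bytes += len(chunk)
    return new_bytes
```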

2.
Data deduplication is a data-reduction technique now widely used in the storage field. Deduplication prediction lets users estimate the benefit of deduplication before it is performed and thus guides how they use the storage system. Prediction techniques continue to evolve and have already seen broad industrial adoption. Several existing prediction techniques achieve high accuracy in suitable application environments; an application-aware prediction technique can further shrink the prediction index table and thereby improve the performance of the prediction algorithm. An adaptive update algorithm for the index table takes the internal redundancy of incoming data into account and further improves prediction accuracy.
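As an illustration of deduplication prediction in general (not the application-aware technique or the adaptive index-update algorithm themselves), the sketch below estimates the deduplication ratio from a deterministically sampled subset of fingerprints; the sample_rate parameter and SHA-256 fingerprinting are assumptions.

```python
import hashlib

def predict_dedup_ratio(chunks, sample_rate=0.01):
    """Estimate the deduplication ratio before actually deduplicating, by
    tracking only fingerprints that fall into a small deterministic sample;
    this keeps the prediction index table far smaller than a full index."""
    threshold = int(sample_rate * 2**64)
    sampled_index = set()
    sampled_total = 0
    for chunk in chunks:
        fp = hashlib.sha256(chunk).digest()
        # Sampling by fingerprint value is deterministic per chunk content.
        if int.from_bytes(fp[:8], "big") >= threshold:
            continue
        sampled_total += 1
        sampled_index.add(fp)
    if not sampled_index:
        return 1.0
    return sampled_total / len(sampled_index)   # logical / physical estimate
```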

3.
As data deduplication becomes widely deployed, performance has become the key factor determining how effective it is in practice. Prior work proposed a deduplication model based on a two-level index structure to improve read/write performance, but did not quantitatively analyze the choice of some of the model's parameters. This work analyzes performance-related parameters of the two-level-index deduplication model, such as chunk size, designs corresponding experiments, and quantitatively characterizes the relationship between these parameters and read/write performance, providing practical guidance for deploying this class of models and an important data basis for further performance optimization.

4.
Storage optimization has long been a concern for system administrators: how can more data be stored in limited space while keeping it valid and intact? With these questions in mind, this article takes the data deduplication feature of Windows Server 2012 for a hands-on test.

5.
马建庭  杨频 《计算机工程与设计》2011,32(11):3586-3589,3617
To address the large amount of duplicate data on file backup servers, a deduplication-based file backup system was designed to serve multiple users located in different geographic locations. The system removes duplicates not only within a single user's data but also across users, saving additional storage space, and it employs security mechanisms to prevent data loss and leakage of user data. Experimental results demonstrate the feasibility of the system, offering a new solution for building a unified backup center for multiple users.

6.
Data Deduplication Techniques (total citations: 2, self-citations: 0, citations by others: 2)
敖莉  舒继武  李明强 《软件学报》2010,21(5):916-929
Data deduplication techniques fall into two main categories: detection of identical data, and detection and encoding of similar data. This survey systematically summarizes both categories and analyzes their strengths and weaknesses. Because deduplication also affects the reliability and performance of storage systems, the techniques proposed to address these two issues are summarized as well. Based on an analysis of the current state of deduplication research, two conclusions are drawn: a) the problem of mining data characteristics in deduplication is not yet fully solved, and how to exploit data-feature information to eliminate duplicates effectively still requires deeper study; b) from the perspective of storage system design, how to introduce suitable mechanisms to overcome the reliability limitations of deduplication and to reduce the extra system overhead it introduces also deserves further research.

7.
《中国信息化》2012,(2):68-68
HP recently announced the industry's first large-scale deduplication appliance offering fully automatic high availability. The HP B6200 StoreOnce Backup System achieved record-setting performance, processing and storing data at 28 TB per hour, more than three times faster than its competitors. Enterprises today face the dual challenge of data volume and data diversity, including both structured data and unstructured content such as the ideas and thoughts people produce in instant communication…

8.
Data Deduplication Techniques (total citations: 12, self-citations: 2, citations by others: 12)
敖莉  舒继武  李明强 《软件学报》2010,21(4):916-929
Data deduplication techniques fall into two main categories: detection of identical data, and detection and encoding of similar data. This survey systematically summarizes both categories and analyzes their strengths and weaknesses. Because deduplication also affects the reliability and performance of storage systems, the techniques proposed to address these two issues are summarized as well. Based on an analysis of the current state of deduplication research, two conclusions are drawn: a) the problem of mining data characteristics in deduplication is not yet fully solved, and how to exploit data-feature information to eliminate duplicates effectively still requires deeper study; b) from the perspective of storage system design, how to introduce suitable mechanisms to overcome the reliability limitations of deduplication and to reduce the extra system overhead it introduces also deserves further research.

9.
张沪寅  周景才  陈毅波  查文亮 《软件学报》2015,26(10):2581-2595
Extensive experimental analysis shows that, in cloud desktop scenarios, the more closely the work of data owners is related, the higher the probability that duplicate data exists between those users. Based on this observation, a user-aware deduplication algorithm is proposed. The algorithm breaks the restriction of spatial locality in data and performs duplicate detection at the coarser granularity of individual users; without lowering the deduplication ratio, it reduces the number of memory-resident fingerprints by a factor of 5 to 10 and keeps the fingerprint search range of each duplicate check within a constant bound that does not grow linearly with the total data volume, effectively avoiding memory exhaustion as data grows. In addition, the algorithm automatically adjusts the duplicate-fingerprint search range according to the load on the storage system, balancing performance against deduplication ratio to better meet the needs of primary storage scenarios. Prototype evaluation shows that the algorithm effectively solves the deduplication performance problem for massive data in cloud computing. Compared with the OpenDedup algorithm, when the total volume of fingerprints exceeds available memory the algorithm shows a large advantage, reducing disk read operations by more than 200% and improving response speed by more than 3x.
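A minimal sketch of the core idea of scoping duplicate checks to a user (or a group of users whose work is related) rather than a global index; the user_groups mapping and SHA-256 fingerprints are assumptions, and the real algorithm's load-adaptive search range is not modeled.

```python
import hashlib
from collections import defaultdict

class UserScopedIndex:
    """Keep one fingerprint set per user group so each duplicate check only
    searches a bounded, user-related subset instead of a global index."""
    def __init__(self, user_groups=None):
        self.index = defaultdict(set)          # group id -> chunk fingerprints
        self.user_groups = user_groups or {}   # user -> group id (assumed mapping)

    def _group(self, user):
        return self.user_groups.get(user, user)   # default: per-user scope

    def is_duplicate(self, user, chunk: bytes) -> bool:
        fp = hashlib.sha256(chunk).digest()
        fingerprints = self.index[self._group(user)]
        if fp in fingerprints:
            return True
        fingerprints.add(fp)
        return False
```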

10.
Fingerprint lookup is an I/O-intensive workload, so the performance of external storage devices is the bottleneck of fingerprint lookup. This work focuses on the fingerprint-lookup stage of deduplication systems, compares the traditional eager fingerprint lookup algorithm with the lazy fingerprint lookup algorithm that aims to reduce disk accesses, analyzes the performance of the two approaches on two new types of storage devices, the Optane solid state drive (Optane SSD) and persistent memory (PM), and offers optimization suggestions. By modeling the time of the eager and lazy lookup algorithms, three optimization conclusions are drawn for fingerprint lookup on the new devices: 1) the number of fingerprints looked up in one batch should be reduced; 2) on faster devices the locality ring used by lazy lookup should be made smaller, and there exists an optimal ring size; 3) on fast devices, eager lookup outperforms lazy lookup. Finally, the model is validated experimentally on a real hard disk drive (HDD), an Optane SSD, and a PM emulator. The results show that fingerprint lookup time on the fast devices is reduced by more than 90% compared with the HDD, that the eager algorithm outperforms the lazy one there, and that the shift of the optimal locality-ring size toward smaller values agrees with the theoretical predictions of the model.

11.
A Chinese Word Segmentation Dictionary Using a Two-Level Index (total citations: 3, self-citations: 0, citations by others: 3)
Chinese word segmentation is the foundation of Chinese information processing and plays a very important role in fields such as search engines and machine translation. The segmentation dictionary is the basis of dictionary-based (mechanical) segmentation algorithms: it tells the algorithm what counts as a word. Because the algorithm repeatedly matches strings against the dictionary during execution, the dictionary's storage structure largely determines which matching algorithm can be used and how well it performs. Building on a study of existing segmentation dictionaries and matching algorithms, and improving on prior work, a multi-level index is added to the dictionary, yielding a new storage scheme, a Chinese word segmentation dictionary based on a two-level index, together with an improved forward-matching algorithm built on it. This greatly reduces the time complexity of the matching process and thus speeds up the overall segmentation algorithm.
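A minimal sketch of a dictionary indexed first by leading character and then by word length, combined with forward maximum matching; it does not reproduce the paper's exact index layout, and the toy dictionary below is for illustration only.

```python
from collections import defaultdict

class TwoLevelDict:
    """Dictionary indexed by a word's leading character and then by length,
    so matching only scans candidates that can possibly match."""
    def __init__(self, words):
        self.index = defaultdict(lambda: defaultdict(set))
        self.max_len = defaultdict(int)
        for w in words:
            self.index[w[0]][len(w)].add(w)
            self.max_len[w[0]] = max(self.max_len[w[0]], len(w))

    def longest_match(self, text, start):
        first = text[start]
        for length in range(min(self.max_len[first], len(text) - start), 0, -1):
            cand = text[start:start + length]
            if cand in self.index[first][length]:
                return cand
        return text[start]            # fall back to a single character

def forward_max_match(text, dictionary: TwoLevelDict):
    """Greedy forward maximum matching over the two-level dictionary."""
    words, i = [], 0
    while i < len(text):
        w = dictionary.longest_match(text, i)
        words.append(w)
        i += len(w)
    return words

# Example with a toy dictionary:
d = TwoLevelDict(["中文", "分词", "中文分词", "词典"])
print(forward_max_match("中文分词词典", d))   # ['中文分词', '词典']
```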

12.
Data deduplication has been widely utilized in large-scale storage systems, particularly backup systems. Data deduplication systems typically divide data streams into chunks and identify redundant chunks by comparing chunk fingerprints. Maintaining all fingerprints in memory is not cost-effective because fingerprint indexes are typically very large. Many data deduplication systems therefore maintain a fingerprint cache in memory and exploit fingerprint prefetching to accelerate the deduplication process. Although fingerprint prefetching can improve the performance of data deduplication systems by leveraging the locality of workloads, inaccurately prefetched fingerprints may pollute the cache by evicting useful fingerprints. We observed that most of the prefetched fingerprints in a wide variety of applications are never used or used only once, which severely limits the performance of data deduplication systems. We introduce a prefetch-aware fingerprint cache management scheme for data deduplication systems (PreCache) to alleviate prefetch-related cache pollution. We propose three prefetch-aware fingerprint cache replacement policies (PreCache-UNU, PreCache-UOO, and PreCache-MIX) to handle different types of cache pollution. Additionally, we propose an adaptive policy selector to select suitable policies for prefetch requests. We implement PreCache on two representative data deduplication systems (Block Locality Caching and SiLo) and evaluate its performance using three real-world workloads (Kernel, MacOS, and Homes). The experimental results reveal that PreCache improves deduplication throughput by up to 32.22%, owing to fewer on-disk fingerprint index lookups and an improved deduplication ratio achieved by mitigating prefetch-related fingerprint cache pollution.
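The sketch below illustrates only the general idea of prefetch-aware cache management, tracking whether a cached fingerprint arrived via prefetch and has ever been hit, and preferring to evict never-used prefetched entries; it does not reproduce the PreCache-UNU/UOO/MIX policies or the adaptive policy selector.

```python
from collections import OrderedDict

class PrefetchAwareCache:
    """LRU-style fingerprint cache that prefers evicting prefetched entries
    which were never hit, limiting prefetch-induced cache pollution."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # fingerprint -> {"prefetched": bool, "hits": int}

    def _evict(self):
        # First try to evict the oldest prefetched-but-never-used entry.
        for fp, meta in self.entries.items():
            if meta["prefetched"] and meta["hits"] == 0:
                del self.entries[fp]
                return
        self.entries.popitem(last=False)      # otherwise plain LRU eviction

    def insert(self, fp, prefetched=False):
        if fp in self.entries:
            self.entries.move_to_end(fp)
            return
        if len(self.entries) >= self.capacity:
            self._evict()
        self.entries[fp] = {"prefetched": prefetched, "hits": 0}

    def lookup(self, fp) -> bool:
        meta = self.entries.get(fp)
        if meta is None:
            return False
        meta["hits"] += 1
        self.entries.move_to_end(fp)
        return True
```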

13.
Deduplication technology has been increasingly used to reduce storage costs. Though it has been successfully applied to backup and archival systems, existing techniques can hardly be deployed in primary storage systems due to the associated latency cost of detecting duplicated data, where every unit has to be checked against a substantially large fingerprint index before it is written. In this paper we introduce Leach, a self-learning in-memory fingerprint cache for inline primary storage, designed to reduce the writing cost in deduplication systems. Leach is motivated by a characteristic of real-world I/O workloads: high data skew exists in the access patterns of duplicated data. Leach adopts a splay tree to organize the on-disk fingerprint index, automatically learns the access patterns, and maintains hot working sets in cache memory, with the goal of servicing the majority of duplicate-detection lookups. Leveraging the working-set property, Leach provides optimizations to reduce the cost of splay operations on the fingerprint index and of cache updates. In comprehensive experiments on several real-world datasets, Leach outperforms the conventional LRU (least recently used) cache policy by reducing the number of cache misses, and significantly improves write performance without a large impact on cache hits.

14.
Chunking is a process to split a file into smaller files called chunks. In some applications, such as remote data compression, data synchronization, and data deduplication, chunking is important because it determines the duplicate detection performance of the system. Content-defined chunking (CDC) is a method to split files into variable-length chunks, where the cut points are defined by some internal features of the files. Unlike fixed-length chunks, variable-length chunks are more resistant to byte shifting. Thus, it increases the probability of finding duplicate chunks within a file and between files. However, CDC algorithms require additional computation to find the cut points, which might be computationally expensive for some applications. In our previous work (Widodo et al., 2016), the hash-based CDC algorithm used in the system took more processing time than the other processes in the deduplication system. This paper proposes a high-throughput hash-less chunking method called Rapid Asymmetric Maximum (RAM). Instead of using hashes, RAM uses byte values to declare the cut points. The algorithm utilizes a fixed-size window and a variable-size window to find a maximum-valued byte, which is the cut point. The maximum-valued byte is included in the chunk and located at the boundary of the chunk. This configuration allows RAM to do fewer comparisons while retaining the CDC property. We compared RAM with existing hash-based and hash-less deduplication systems. The experimental results show that our proposed algorithm has higher throughput and bytes saved per second compared to other chunking algorithms.
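A short sketch in the spirit of the RAM scheme described above, under the interpretation that the maximum byte value found in a fixed-size window sets a threshold and the first later byte reaching that value is taken as the cut point; window sizes and tie-breaking details are assumptions, not values from the paper.

```python
def ram_chunks(data: bytes, fixed_window: int = 4096, max_chunk: int = 65536):
    """Hash-less content-defined chunking in the spirit of Rapid Asymmetric
    Maximum (RAM): the maximum byte value in a fixed-size window sets a
    threshold, and the first later byte reaching that value becomes the cut
    point (included in the chunk)."""
    chunks, start, n = [], 0, len(data)
    while start < n:
        end = min(start + fixed_window, n)
        threshold = max(data[start:end]) if end > start else 0
        cut = None
        # Variable-size window: scan until a byte >= threshold is found.
        for i in range(end, min(start + max_chunk, n)):
            if data[i] >= threshold:
                cut = i + 1            # include the maximum-valued byte
                break
        if cut is None:
            cut = min(start + max_chunk, n)
        chunks.append(data[start:cut])
        start = cut
    return chunks
```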

15.
《国际计算机数学杂志》2012,89(5):1054-1060
Vendor–retailer collaboration has an important role in supply chain management. Although vendor–retailer collaboration results in better supply chain profit, collaboration is difficult to realize, because most vendors and retailers try to optimize their own profit. This paper applies the Stackelberg game with stochastic demand to the vendor–retailer system. The vendor, as leader, determines the product price, and the retailer decides the order quantity and the frequency of price markdown. This study develops an example and sensitivity analyses to illustrate the theory. Results show that the price markdown option yields a better total supply chain profit than a policy without price markdowns, and the vendor receives more of the benefit. For different demand variances, the retailer's profit is more sensitive than the vendor's profit.

16.
Image inpainting based on partial differential equations (PDEs) and inpainting based on texture synthesis are the main approaches in digital image inpainting today; although both can restore images well, they are relatively inefficient. A new algorithm that repairs images using neighborhood similarity is proposed: priorities are first computed for all points to be repaired on the boundary of the damaged region, and the image is then repaired in order of decreasing priority. The algorithm measures the similarity of two pixels by the similarity of their neighborhoods, fully accounting for the influence of the known information in a pixel's neighborhood on that pixel. Simulation results show that the algorithm not only restores images well but also achieves higher efficiency for the same repaired region and repair quality.

17.
This paper discusses numerical methods for bilevel programming problems in which the lower-level problem feeds its optimal value back to the upper level, where both the objective and constraint functions are Lipschitz continuous. An interval extension of the bilevel objective function and a deletion test for regions containing no solution are constructed, an interval algorithm for solving the bilevel problem is established, and numerical experiments are carried out. Both theoretical proofs and numerical experiments show that the algorithm is reliable and effective.
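For context, a generic bilevel program of the kind the abstract refers to, in which the lower level feeds its optimal value back to the upper level, can be written as below; this is the standard formulation with Lipschitz-continuous F, G, f, g, not the paper's specific interval construction.

```latex
\begin{aligned}
&\min_{x \in X} \; F\bigl(x,\,\varphi(x)\bigr)
\quad \text{s.t.} \quad G\bigl(x,\,\varphi(x)\bigr) \le 0,\\
&\text{where } \varphi(x) = \min_{y \in Y} \bigl\{\, f(x,y) \;:\; g(x,y) \le 0 \,\bigr\}.
\end{aligned}
```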

18.
BLISS/S: a new method for two-level structural optimization (total citations: 2, self-citations: 2, citations by others: 2)
The paper describes a two-level method for structural optimization for minimum weight under local strength and displacement constraints. The method divides the optimization task into separate optimizations of the individual substructures (in the extreme, the individual components), coordinated by the assembled-structure optimization. The substructure optimizations use local cross-sections as design variables and satisfy the highly nonlinear local constraints of strength and buckling. The design variables in the assembled-structure optimization govern the structure's overall shape and handle the displacement constraints. The assembled-structure objective function is the objective in each of the above optimizations. The substructure optimizations are linked to the assembled-structure optimization by sensitivity derivatives. The method was derived from a previously reported two-level optimization method for engineering systems, e.g. aerospace vehicles, that comprise interacting modules to be optimized independently, with coordination provided by a system-level optimization. This scheme was adapted to structural optimization by treating each substructure as a module in a system, and using the standard finite element analysis as the system analysis. A numerical example, a hub structure framework, is provided to show the new method's agreement with a standard, no-decomposition optimization. The new method's advantage lies primarily in the autonomy of the individual substructure optimizations, which enables concurrent execution to compress the overall elapsed time of the task. The advantage increases with the magnitude of that task. Received December 5, 1999. Revised manuscript received April 26, 2000.

19.
A Recognition Algorithm for Similar Chinese Characters (total citations: 7, self-citations: 5, citations by others: 7)
This paper proposes a general algorithm for recognizing similar Chinese characters based on the partial-space method. The algorithm needs neither predefined groups of similar characters nor manually selected partial spaces for each group; it automatically decides whether a character to be recognized should enter the similar-character recognition stage and how the partial space should be chosen. Experimental results demonstrate the effectiveness of the algorithm.

20.
The widely used ext3 file system suffers a marked performance drop when performing index operations on directories beyond a certain size. This paper first analyzes the cause of this behavior, then proposes a hash-based solution to the ext3 directory indexing problem and presents an implementation. Experimental data obtained on several test platforms demonstrate the effectiveness of the hash technique in resolving the ext3 performance bottleneck.
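A toy model of hash-bucketed directory lookup, conceptually similar to (but much simpler than) the HTree-style dir_index used to address this bottleneck in ext3; the bucket count and hash choice are arbitrary assumptions and do not reflect any on-disk format.

```python
import hashlib

class HashedDirectory:
    """Toy model of a hash-indexed directory: names are spread over hash
    buckets so a lookup scans one small bucket instead of the whole listing."""
    def __init__(self, num_buckets: int = 256):
        self.buckets = [dict() for _ in range(num_buckets)]
        self.num_buckets = num_buckets

    def _bucket(self, name: str) -> dict:
        h = hashlib.md5(name.encode()).digest()
        return self.buckets[int.from_bytes(h[:4], "big") % self.num_buckets]

    def add(self, name: str, inode: int):
        self._bucket(name)[name] = inode

    def lookup(self, name: str):
        return self._bucket(name).get(name)   # O(bucket size), not O(directory size)
```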
