Similar Documents
20 similar documents found.
1.
Chunking is a process to split a file into smaller files called chunks. In some applications, such as remote data compression, data synchronization, and data deduplication, chunking is important because it determines the duplicate detection performance of the system. Content-defined chunking (CDC) is a method to split files into variable-length chunks, where the cut points are defined by some internal features of the files. Unlike fixed-length chunks, variable-length chunks are more resistant to byte shifting, which increases the probability of finding duplicate chunks within a file and between files. However, CDC algorithms require additional computation to find the cut points, which might be computationally expensive for some applications. In our previous work (Widodo et al., 2016), the hash-based CDC algorithm used in the system took more processing time than any other process in the deduplication system. This paper proposes a high-throughput hash-less chunking method called Rapid Asymmetric Maximum (RAM). Instead of using hashes, RAM uses byte values to declare the cut points. The algorithm utilizes a fixed-size window and a variable-size window to find a maximum-valued byte, which is the cut point. The maximum-valued byte is included in the chunk and located at the boundary of the chunk. This configuration allows RAM to do fewer comparisons while retaining the CDC property. We compared RAM with existing hash-based and hash-less deduplication systems. The experimental results show that our proposed algorithm has higher throughput and bytes saved per second compared to other chunking algorithms.
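As a rough illustration of the windowing idea described in this abstract, the following Python sketch finds cut points by scanning a fixed-size window for its maximum byte value and then extending a variable-size window until a byte at least as large appears. The window size, the ">=" comparison, and the function name are illustrative assumptions, not the authors' exact design.

```python
def ram_chunk_boundaries(data: bytes, fixed_window: int = 4096):
    """Sketch of Rapid Asymmetric Maximum (RAM) style chunking.

    For each chunk, the maximum byte value in a fixed-size window is recorded;
    the following variable-size window is then scanned until a byte greater
    than or equal to that maximum is found, and that byte (included in the
    chunk) marks the cut point.
    """
    boundaries = []
    start = 0
    n = len(data)
    while start < n:
        end = min(start + fixed_window, n)
        max_byte = max(data[start:end])
        cut = n  # default: the rest of the file becomes one chunk
        for i in range(end, n):
            if data[i] >= max_byte:
                cut = i + 1  # maximum-valued byte sits at the chunk boundary
                break
        boundaries.append(cut)
        start = cut
    return boundaries
```

Because only byte comparisons are involved, no hash has to be computed at every position, which is where the throughput advantage over hash-based CDC comes from.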

2.
Research on a fast IP packet classification algorithm based on XOR hashing
Building on hash algorithms, this paper proposes an IP classification algorithm based on XOR hashing. The algorithm has three core points. First, the destination/source IP addresses, destination/source ports, and protocol field are concatenated into a bit string, which is split into five blocks that are XORed together to obtain the classification key. Second, to lower the collision rate, this key is XORed again with a random number to obtain the final classification index. Third, to guarantee that a matched rule is correct, the source/destination IP addresses are verified once for each index value. These three refinements generally reduce the algorithm's time and space complexity. In simulation, when classifying packets against 10,000 classification rules, the algorithm reaches a classification speed of 2 Mpps with a maximum memory consumption of 6 MB.
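A minimal Python sketch of the two-stage XOR hash described above. The 21-bit block width, the salt width, and the helper names are assumptions for illustration; the final rule-verification step is only indicated in a comment.

```python
import random

BLOCK_BITS = 21                                # five blocks cover the 104-bit five-tuple (assumed width)
BLOCK_MASK = (1 << BLOCK_BITS) - 1
RANDOM_SALT = random.getrandbits(BLOCK_BITS)   # fixed random number for the second XOR stage


def xor_hash_index(src_ip: int, dst_ip: int, src_port: int, dst_port: int, proto: int) -> int:
    """Two-stage XOR hash: fold the five-tuple into five blocks, XOR them, then XOR a salt."""
    # Concatenate the five fields into one 104-bit string (32 + 32 + 16 + 16 + 8 bits).
    key = (src_ip << 72) | (dst_ip << 40) | (src_port << 24) | (dst_port << 8) | proto
    index = 0
    for _ in range(5):                 # split into five blocks and XOR them together
        index ^= key & BLOCK_MASK
        key >>= BLOCK_BITS
    return index ^ RANDOM_SALT         # second XOR with a random number lowers the collision rate

# On lookup, the rule stored at the resulting index must still have its
# source/destination IP addresses compared against the packet, since
# different five-tuples can map to the same index.
```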

3.
Data deduplication for file communication across wide area networks (WAN) in applications such as file synchronization and mirroring of cloud environments usually achieves significant bandwidth savings at the cost of significant time overheads. The time overheads include the time required for data deduplication at two geographically distributed nodes (e.g., the disk access bottleneck) and the duplication query/answer operations between the sender and the receiver, since each query or answer introduces at least one round-trip time (RTT) of latency. In this paper, we present a data deduplication system across WAN with metadata feedback and metadata utilization (MFMU), in order to harness the data deduplication related time overheads. In the proposed MFMU system, selective metadata feedbacks from the receiver to the sender are introduced to reduce the number of duplication query/answer operations. In addition, to harness the metadata related disk I/O operations at the receiver, as well as the bandwidth overhead introduced by the metadata feedbacks, a hysteresis hash re-chunking mechanism based metadata utilization component is introduced. Our experimental results demonstrated that MFMU achieved an average of 20%~40% deduplication acceleration with the bandwidth saving ratio not reduced by the metadata feedbacks, as compared with the “baseline” content-defined chunking (CDC) used in LBFS (Low-Bandwidth Network File System) and existing state-of-the-art Bimodal chunking based data deduplication solutions.

4.
With the explosive growth of data, storage systems are facing huge storage pressure due to a mass of redundant data caused by duplicate copies or regions of files. Data deduplication is a storage-optimization technique that reduces the data footprint by eliminating multiple copies of redundant data and storing only unique data. The basis of data deduplication is duplicate data detection, which divides files into a number of parts, compares corresponding parts between files via hash techniques, and identifies redundant data. This paper proposes an efficient sliding blocking algorithm with backtracking sub-blocks, called SBBS, for duplicate data detection. SBBS improves the duplicate detection precision of the traditional sliding blocking (SB) algorithm by backtracking the left/right 1/4 and 1/2 sub-blocks in matching-failed segments. Experimental results show that SBBS improves duplicate detection precision by 6.5% on average compared with the traditional SB algorithm and by 16.5% compared with the content-defined chunking (CDC) algorithm, and it does not add much extra storage overhead when SBBS divides the files into equal chunks of size 8 kB.
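For context, the sketch below shows the classic sliding-blocking detection that SBBS builds on: fixed 8 kB blocks of the reference file are indexed by a cheap checksum plus a strong hash, and the target file is scanned with a byte-by-byte sliding window. The backtracking of 1/4 and 1/2 sub-blocks that SBBS adds is omitted; the checksum choice and helper names are assumptions.

```python
import hashlib

BLOCK_SIZE = 8 * 1024  # 8 kB equal-sized blocks, as in the experiments above


def weak_sum(block: bytes) -> int:
    """Cheap additive checksum used as a pre-filter (a real implementation would roll it incrementally)."""
    return sum(block) & 0xFFFFFFFF


def build_index(reference: bytes) -> dict:
    """Index the reference file by fixed 8 kB blocks: weak sum -> set of strong hashes."""
    index = {}
    for off in range(0, len(reference), BLOCK_SIZE):
        block = reference[off:off + BLOCK_SIZE]
        index.setdefault(weak_sum(block), set()).add(hashlib.sha1(block).hexdigest())
    return index


def find_duplicates(target: bytes, index: dict) -> list:
    """Slide a window over the target byte by byte and report offsets of duplicate blocks.

    SBBS additionally backtracks the left/right 1/4 and 1/2 sub-blocks of
    segments where matching failed; that refinement is not shown here.
    """
    hits, off = [], 0
    while off + BLOCK_SIZE <= len(target):
        block = target[off:off + BLOCK_SIZE]
        candidates = index.get(weak_sum(block))
        if candidates and hashlib.sha1(block).hexdigest() in candidates:
            hits.append(off)
            off += BLOCK_SIZE      # jump past a matched block
        else:
            off += 1               # slide one byte on mismatch
    return hits
```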

5.
Statistical machine translation based on multi-layer filtering
This paper proposes a multi-layer filtering algorithm for automatically extracting and aligning bilingual chunks from aligned Chinese-English sentence pairs. Chunks with different characteristics are processed at different layers. Unlike traditional approaches, the algorithm requires no POS tagging, syntactic parsing, or morphological analysis, and does not even require Chinese word segmentation. Preliminary experiments show good performance: chunk extraction reaches F = 0.70 and chunk alignment reaches F = 0.80. Furthermore, when the aligned bilingual chunks produced by this algorithm are used in a statistical machine translation system and compared against a word-based system, the chunk-based system clearly improves translation quality, by roughly 10%.

6.
邓艺夫  胡振 《现代计算机》2006,(7):100-102,112
How can the confidentiality and integrity of data files be ensured? This is an important question in application development and deployment. The .NET Framework offers several algorithms for encrypting and decrypting files, and hash functions can be used to verify file integrity. This paper presents and implements a method for developing applications on the .NET Framework that combines the DES encryption algorithm with the SHA1 hash function to encrypt files and verify their integrity.
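The paper targets .NET Framework classes; purely to illustrate the encrypt-plus-digest idea, here is a hedged Python sketch using pycryptodome's DES in CBC mode together with hashlib's SHA-1. The library choice, mode, and padding are assumptions, not the paper's implementation.

```python
import hashlib

from Crypto.Cipher import DES            # pycryptodome; stands in for the .NET DES provider
from Crypto.Util.Padding import pad, unpad


def encrypt_with_digest(plaintext: bytes, key: bytes, iv: bytes):
    """Encrypt with DES-CBC and return (ciphertext, SHA-1 digest of the plaintext)."""
    cipher = DES.new(key, DES.MODE_CBC, iv)            # key and iv must each be 8 bytes
    ciphertext = cipher.encrypt(pad(plaintext, DES.block_size))
    return ciphertext, hashlib.sha1(plaintext).hexdigest()


def decrypt_and_verify(ciphertext: bytes, key: bytes, iv: bytes, expected_digest: str) -> bytes:
    """Decrypt and check integrity by recomputing the SHA-1 digest."""
    cipher = DES.new(key, DES.MODE_CBC, iv)
    plaintext = unpad(cipher.decrypt(ciphertext), DES.block_size)
    if hashlib.sha1(plaintext).hexdigest() != expected_digest:
        raise ValueError("integrity check failed: SHA-1 digest mismatch")
    return plaintext
```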

7.
The Hadoop Distributed File System (HDFS) is typically used to store and manage large files; when massive numbers of small files are stored and processed, it consumes large amounts of NameNode memory and access time, which becomes a major factor limiting HDFS performance. Targeting the massive-small-file problem in multimodal medical data, this paper proposes a small-file storage optimization method based on two-level hash coding and HBase. When small files are merged, an extendible hash function is used to build the buckets of the index file, so the index file can grow dynamically as needed and supports appending files. Within each bucket, an MWHC hash function records the position of each file's index entry in the index file, so accessing a file only requires reading the index information of the corresponding bucket rather than that of all files; a file can therefore be located in O(1) time, improving lookup efficiency. To meet the storage needs of multimodal medical data, file index information is stored in HBase, with a flag column identifying the modality of each record, which simplifies storage management across modalities and improves read speed. To further optimize storage performance, an LRU-based metadata prefetching mechanism is established, and the merged files are compressed with the LZ4 algorithm. Experiments comparing file access performance and NameNode memory usage show that, compared with the original HDFS, HAR, MapFile, TypeStorage, and ...

8.
王银涛  高媛 《计算机工程》2012,38(23):79-83,87
To transfer files (audio, video) efficiently in opportunistic networks, this paper proposes UH-EC, a hybrid routing algorithm that combines coding with utility values based on node characteristics. The source file is encoded into smaller data blocks, and at each hop the next forwarder is selected by a node-characteristic-based utility value, continually seeking nodes with strong forwarding capability until the data reach the destination node. Theoretical analysis and simulation show that, compared with the classic H-EC routing algorithm, UH-EC effectively reduces network overhead, end-to-end packet delay, and the impact of black-hole nodes on the file delivery ratio.

9.
An advantage of peer-to-peer applications is that files can be shared without concentrated load on file servers. This note proposes deterministic techniques for splitting files into segments so that, for certain restricted cases of arrival and server rates, users may copy files from one another without fetching the data from file servers.

10.
Design, analysis, and implementation of parallel join algorithms based on parallel B+-trees
The B+-tree is an effective database storage structure that is widely used in relational database systems. Parallelizing the B+-tree so that it can be used in parallel database systems is clearly meaningful and important work. This paper studies a parallel B+-tree storage structure suited to parallel databases and proposes two classes of parallel join algorithms based on parallel B+-trees. Theoretical and experimental results show that these algorithms are more efficient than other parallel join algorithms.

11.
Research on log-structured file system technology
This paper introduces a new disk management technique, the log-structured file system, in which modifications to files are collected into log entries and written to disk sequentially, speeding up both file writes and crash recovery. The entire disk is treated as a log, and the disk holds the index information needed to read files from the log structure efficiently. To maintain the large free disk regions required for fast writes, the disk is divided into segments, and a cleaner thread collects and compacts the live data scattered across segments. Taking a prototype log-structured file system, Sprite LFS, as an example, the paper analyzes each stage of the design and implementation of a log-structured file system and compares it with the Unix Fast File System (FFS).

12.
To address practical problems that arise when stream ciphers are used for file encryption, such as excessive memory consumption and the size of the encrypted file being limited by available memory, this paper applies ideas from the sliding-window protocol to stream-cipher file encryption. The file-encryption and file-writing steps are separated into two modules, treated as sender and receiver respectively; guided by the protocol, the two modules work in concert, reading and encrypting data and then writing and saving it within a buffer called the window, thereby improving the performance of the encryption program.
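A small Python sketch of the idea: the encrypting "sender" and the file-writing "receiver" run concurrently and are coupled only through a bounded buffer that plays the role of the window, so memory stays capped regardless of file size. The toy XOR keystream, the window size, and the block size are placeholders, not the paper's cipher or parameters.

```python
import queue
import threading

WINDOW = 8  # number of in-flight encrypted blocks; the "window" of the protocol analogy


def xor_keystream_encrypt(block: bytes, key: bytes) -> bytes:
    """Toy stream cipher (XOR with a repeating key) standing in for a real one."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(block))


def encrypt_file(src: str, dst: str, key: bytes, block_size: int = 64 * 1024) -> None:
    """Encrypt and write concurrently through a bounded buffer of WINDOW blocks."""
    buf = queue.Queue(maxsize=WINDOW)

    def writer():
        # "Receiver": takes encrypted blocks out of the window and saves them.
        with open(dst, "wb") as out:
            while True:
                chunk = buf.get()
                if chunk is None:        # sentinel: no more data
                    break
                out.write(chunk)

    t = threading.Thread(target=writer)
    t.start()
    # "Sender": reads and encrypts; blocks whenever the window is full.
    with open(src, "rb") as inp:
        for block in iter(lambda: inp.read(block_size), b""):
            buf.put(xor_keystream_encrypt(block, key))
    buf.put(None)
    t.join()
```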

13.
Soar is an architecture for a system that is intended to be capable of general intelligence. Chunking, a simple experience-based learning mechanism, is Soar's only learning mechanism. Chunking creates new items of information, called chunks, based on the results of problem-solving and stores them in the knowledge base. These chunks are accessed and used in appropriate later situations to avoid the problem-solving required to determine them. It is already well established that chunking improves performance in Soar when viewed in terms of the subproblems required and the number of steps within a subproblem. However, despite the reduction in the number of steps, there can sometimes be a severe degradation in total run time. This problem arises due to expensive chunks, i.e., chunks that require a large amount of effort to access from the knowledge base. They pose a major problem for Soar, since in their presence no guarantees can be given about Soar's performance. In this article, we establish that expensive chunks exist and analyze their causes. We use this analysis to propose a solution for expensive chunks. The solution is based on the notion of restricting the expressiveness of the representational language to guarantee that the chunks formed will require only a limited amount of accessing effort. We analyze the tradeoffs involved in restricting expressiveness and present some empirical evidence to support our analysis.

14.
We consider a two-tier content distribution system for distributing massive content, consisting of an infrastructure content distribution network (CDN) and a large number of ordinary clients. The nodes of the infrastructure network form a structured, distributed-hash-table-based (DHT) peer-to-peer (P2P) network. Each file is first placed in the CDN and possibly replicated among the infrastructure nodes depending on its popularity. In such a system, it is particularly pressing to have proper load-balancing mechanisms to relieve server or network overload. The subject of the paper is popularity-based file replication techniques within the CDN using multiple hash functions. Our strategy is to set aside a large number of hash functions. When the demand for a file exceeds the overall capacity of the current servers, a previously unused hash function is used to obtain a new node ID where the file will be replicated. The central problems are how to choose an unused hash function when replicating a file and how to choose a used hash function when requesting the file. Our solution to the file replication problem is to choose the unused hash function with the smallest index, and our solution to the file request problem is to choose a used hash function uniformly at random. Our main contribution is that we have developed a set of distributed, robust algorithms to implement the above solutions and we have evaluated their performance. In particular, we have analyzed a random binary search algorithm for file request and a random gap removal algorithm for failure recovery.
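A compact Python sketch of the two policies stated above: replication picks the unused hash function with the smallest index, and requests pick a used hash function uniformly at random. The salted-SHA-1 hash family, the 32-bit ID space, and the class name are assumptions for illustration.

```python
import hashlib
import random


def node_id(filename: str, hash_index: int) -> int:
    """Derive a DHT node ID from a file name using the i-th hash function.

    A family of hash functions is simulated by salting SHA-1 with the index;
    the real system's hash family and identifier space are assumptions here.
    """
    digest = hashlib.sha1(f"{hash_index}:{filename}".encode()).hexdigest()
    return int(digest, 16) % (2 ** 32)   # assume a 32-bit DHT identifier space


class ReplicaDirectory:
    """Tracks, per file, how many of the preset hash functions are in use."""

    def __init__(self):
        self.used = {}                            # filename -> count of hash functions in use

    def replicate(self, filename: str) -> int:
        """On overload, use the unused hash function with the smallest index."""
        i = self.used.get(filename, 1)            # hash 0 is the original CDN placement
        self.used[filename] = i + 1
        return node_id(filename, i)

    def request(self, filename: str) -> int:
        """Requests pick one of the used hash functions uniformly at random."""
        i = random.randrange(self.used.get(filename, 1))
        return node_id(filename, i)
```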

15.
In recent years, grid technology has grown so quickly that it is now used in many scientific experiments and research centers. A large number of storage elements and computational resources are combined to form a grid that provides shared access to extra computing power. In particular, data grids deal with data-intensive applications and provide intensive resources across widely distributed communities. Data replication is an efficient way of distributing replicas among data grids, making it possible to access the same data in different locations of the data grid. Replication reduces data access time and improves the performance of the system. In this paper, we propose a new dynamic data replication algorithm named PDDRA that optimizes the traditional algorithms. Our proposed algorithm is based on an assumption: members in a VO (Virtual Organization) have similar interests in files. Based on this assumption and on file access history, PDDRA predicts future needs of grid sites and pre-fetches a sequence of files to the requesting grid site, so the next time this site needs a file, it will be available locally. This considerably reduces access latency, response time, and bandwidth consumption. PDDRA consists of three phases: storing file access patterns; requesting a file and performing replication and pre-fetching; and replacement. The algorithm was tested using OptorSim, a grid simulator developed by the European Data Grid project. The simulation results show that our proposed algorithm performs better than other algorithms in terms of job execution time, effective network usage, total number of replications, hit ratio, and percentage of storage filled.

16.
Using the security classes provided by the .NET Framework, a program was developed in VB.NET that computes the MD5 hash of an arbitrary file. Testing shows that it runs quickly even for large files of more than 200 MB. The result can be saved in a text file of the same name and applied to software copyright protection.
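A hedged Python equivalent of the described tool, reading the file in fixed-size chunks so that files well over 200 MB are hashed without loading them into memory; the 1 MB chunk size is an arbitrary choice for this sketch.

```python
import hashlib


def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of an arbitrarily large file, chunk by chunk."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()
```

The returned hex digest can then be written to a text file of the same name, as the paper describes.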

17.
Nowadays, the rapid development of the Internet calls for high-performance file systems, and much effort has already been devoted to the issue of assigning nonpartitioned files in a parallel file system with the aim of achieving prompt responses to requests. Yet most existing strategies still fail to deliver optimal performance in terms of system mean response time, so new strategies that achieve better mean response time are indispensable for parallel file systems. This paper addresses the issue of assigning nonpartitioned files in parallel file systems where file accesses exhibit Poisson arrival rates and fixed service times. It presents an on-line file assignment strategy, named prediction-based dynamic file assignment (PDFA), to minimize the mean response time among disks under different workload conditions, and compares PDFA with well-known file assignment algorithms such as HP and SOR. Comprehensive experimental results show that PDFA consistently achieves the best mean response time among all the algorithms compared.

18.
Chinese chunking based on SVM
Classification algorithms based on SVM (support vector machine) theory have attracted growing attention from researchers at home and abroad thanks to their solid theoretical foundation and good experimental results. Compared with other classification algorithms, SVM, which is built on the structural risk minimization principle, generalizes well in small-sample pattern recognition. Text chunking, as a preprocessing stage for syntactic parsing, reduces the difficulty of parsing by dividing the text into a set of non-overlapping segments. This paper treats Chinese chunk recognition as a classification problem and solves it with SVM. Experimental results show that the SVM approach is effective for Chinese chunking, achieving F = 88.67% on the Harbin Institute of Technology treebank corpus, and that it is particularly suitable when labeled Chinese data are limited.

19.
One of the main trends in the modern anti-virus industry is the development of algorithms that help estimate the similarity of files. Since malware writers tend to use increasingly complex techniques to protect their code, such as obfuscation and polymorphism, anti-virus software vendors face the increasing difficulty of file scanning, the considerable growth of anti-virus databases, and the overgrowth of file storage. For solving such problems, static analysis of files is of particular interest: it helps determine the file characteristics needed for comparison without executing malware samples within a protected environment. The solution provided in this article is based on the assumption that different samples of the same malicious program have a similar order of code and data areas. Each such file area may be characterized not only by its length but also by its homogeneity; in other words, the file may be characterized by the complexity of its data order. Our approach consists of using wavelet analysis to segment files into segments of different entropy levels and using the edit distance between segment sequences to determine the similarity of the files. The proposed solution has a number of advantages that help detect malicious programs efficiently on personal computers. First, the comparison does not take into account the functionality of the analysed files and is based solely on determining the similarity of code and data area positions, which makes the algorithm effective against many ways of protecting executable code. On the other hand, such a comparison may result in false alarms, so our solution is useful as a preliminary test that triggers additional checks. Second, the method is relatively easy to implement and does not require code disassembly or emulation. Third, the method makes the malicious file record compact, which is significant when compiling anti-virus databases.
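To make the pipeline concrete, the Python sketch below quantizes per-window Shannon entropy into a small number of levels (a simple stand-in for the wavelet-based segmentation used in the paper) and compares two files by the edit distance between their level sequences. The window size, the number of levels, and the similarity normalization are assumptions for illustration.

```python
import math
from collections import Counter


def entropy_levels(data: bytes, window: int = 256, levels: int = 8) -> list:
    """Quantized Shannon entropy per window, with consecutive equal levels collapsed into segments."""
    seq = []
    for off in range(0, len(data) - window + 1, window):
        counts = Counter(data[off:off + window])
        h = -sum(c / window * math.log2(c / window) for c in counts.values())
        level = min(int(h / 8.0 * levels), levels - 1)   # 8 bits/byte is the entropy ceiling
        if not seq or seq[-1] != level:                  # run-length collapse into segments
            seq.append(level)
    return seq


def edit_distance(a: list, b: list) -> int:
    """Classic Levenshtein distance between two segment sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]


def similarity(f1: bytes, f2: bytes) -> float:
    """Similarity score in [0, 1]; 1.0 means identical segment structure."""
    a, b = entropy_levels(f1), entropy_levels(f2)
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
```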

20.
In many e-science applications, there is an important need to aggregate information from data repositories distributed around the world. In an effort to better link these resources in a unified manner, many lambda-grid networks, which provide end-to-end dedicated optical-circuit-switched connections, have been investigated. In this context, we consider the problem of aggregating files from distributed databases at a (grid) computing node over a lambda grid. The challenge is (1) to identify routes (that is, circuits) in the lambda-grid network along which files should be transmitted, and (2) to schedule the transfers of these files over their respective circuits. To address this challenge, we propose a hybrid approach that combines offline and online scheduling. We define the Time-Path Scheduling Problem (TPSP) for offline scheduling. We prove that TPSP is NP-complete, develop a Mixed Integer Linear Program (MILP) formulation for TPSP, and then propose a greedy approach to solve TPSP because the MILP does not scale well. We compare the performance of the greedy approach on a few representative lambda-grid network topologies. One key input to the offline schedule is the file transfer time. Owing to dynamics at the receiving end host, which are hard to model precisely, the actual file transfer time may vary. We first propose a model for estimating the file transfer time. Then we propose online reconfiguration algorithms so that, as files are transferred, the offline schedule may be modified online depending on how long each file actually took to transfer. This helps reduce the total time to transfer all the files, which is an important metric. To demonstrate the effectiveness of our approach, we present results on an emulated lambda-grid network testbed.
