Similar Literature
6 similar documents found.
1.
Chunking is the process of splitting a file into smaller pieces called chunks. In applications such as remote data compression, data synchronization, and data deduplication, chunking is important because it determines the duplicate detection performance of the system. Content-defined chunking (CDC) splits files into variable-length chunks whose cut points are determined by internal features of the file content. Unlike fixed-length chunks, variable-length chunks are resistant to byte shifting, which increases the probability of finding duplicate chunks within a file and between files. However, CDC algorithms require additional computation to find the cut points, which can be expensive for some applications. In our previous work (Widodo et al., 2016), the hash-based CDC algorithm took more processing time than any other stage of the deduplication system. This paper proposes a high-throughput hash-less chunking method called Rapid Asymmetric Maximum (RAM). Instead of computing hashes, RAM uses byte values to declare cut points. The algorithm uses a fixed-size window followed by a variable-size window to find a maximum-valued byte, which becomes the cut point; this byte is included in the chunk and located at its boundary. This configuration allows RAM to perform fewer comparisons while retaining the CDC property. We compared RAM with existing hash-based and hash-less deduplication systems. The experimental results show that the proposed algorithm achieves higher throughput and more bytes saved per second than other chunking algorithms.
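A minimal sketch of the asymmetric-maximum idea described in the abstract, not the authors' implementation: the maximum byte in a fixed-size window is tracked, and the chunk is extended until a byte at least that large appears; that byte closes the chunk. The window sizes `fixed_size` and `max_chunk` are illustrative parameters, not the paper's tuned values.

```python
def ram_chunk_boundaries(data: bytes, fixed_size: int = 4096, max_chunk: int = 65536):
    """Yield cumulative chunk end offsets using the asymmetric-maximum idea:
    find the maximum byte in a fixed-size window, then extend the chunk
    until a byte >= that maximum appears; that byte closes the chunk."""
    n = len(data)
    start = 0
    while start < n:
        fixed_end = min(start + fixed_size, n)
        local_max = max(data[start:fixed_end])  # max byte value in the fixed window
        cut = fixed_end
        # variable-size window: scan until a byte >= local_max (or a size cap)
        while cut < n and cut - start < max_chunk and data[cut] < local_max:
            cut += 1
        end = min(cut + 1, n)  # include the maximum-valued byte at the boundary
        yield end
        start = end
```

For example, `list(ram_chunk_boundaries(data))` returns the end offsets of all chunks; note that no hash is computed anywhere, only byte comparisons.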

2.
Existing proof-of-ownership deduplication schemes are vulnerable to honest-but-curious servers, and solving this problem with a trusted third party incurs excessive overhead. Based on a dynamic Bloom filter, this paper proposes an improved proof-of-ownership secure deduplication scheme that requires no trusted third party. It uses convergent encryption to resist the honest-but-curious server, and prevents data pollution attacks by having the server check the consistency between each data block's ciphertext and its tag. In addition, a key-chain mechanism is used to manage the convergent keys, solving the problem that convergent keys occupy too much storage space in existing schemes. Analysis and comparison show that the scheme has low key storage and transmission overheads.
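A minimal sketch of generic convergent encryption plus the server-side consistency check described above, assuming the standard construction (key derived from the plaintext, tag recomputable from the ciphertext); the helper names are illustrative, not from the paper. It uses the third-party `cryptography` package.

```python
# Convergent encryption sketch: identical plaintext blocks always produce
# identical ciphertexts, enabling deduplication over encrypted data.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

NONCE = b"\x00" * 12  # deterministic nonce: required so equal blocks encrypt equally

def convergent_encrypt(block: bytes):
    key = hashlib.sha256(block).digest()          # key derived from the plaintext itself
    ciphertext = AESGCM(key).encrypt(NONCE, block, None)
    tag = hashlib.sha256(ciphertext).hexdigest()  # tag the server can recompute
    return key, ciphertext, tag

def server_check(ciphertext: bytes, tag: str) -> bool:
    # the server verifies ciphertext/tag consistency to block data pollution
    return hashlib.sha256(ciphertext).hexdigest() == tag
```

Because the tag is a hash of the ciphertext, an honest-but-curious server can reject a polluted upload without ever learning the plaintext or the key.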

3.
In massive IoT systems, large amounts of data are collected and stored in clouds, edge devices, and terminals, but the data are mostly isolated. To meet the new demands of various intelligent applications, self-organized collaborative learning over these data to reach group decisions has become a new trend. However, reaching group decisions requires solving the trust problems in data fusion and model fusion, since the participants may not be trustworthy. We propose a consistent trust fusion method on a consortium chain to reach consensus and realize self-organized, trusted, decentralized collaborative learning. In each consensus round, consensus candidates check the trust levels of others, so that they tend to fuse consensus with highly trusted users; the trust levels are scores evaluated from each user's behavior in past consensus rounds and stored in the public ledger of the blockchain. A trust reward-and-punishment method is designed to realize trust-incentivized consensus: candidates with higher trust levels have more rights and reputation in the consensus. Simulation results and security analysis show that the method effectively defends against malicious users and data, improves the trust perception performance of the whole federated learning network, and makes federated learning more trusted and stable.
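A toy sketch of the reward-and-punishment bookkeeping implied above; the update constants, starting score, and partner-selection rule are assumptions for illustration, not the paper's formulas. Honest behavior in a round raises a participant's score, detected misbehavior lowers it more sharply, and fusion partners are drawn from the highest-scoring candidates.

```python
# Toy trust-score ledger for trust-incentivized consensus (illustrative only).
REWARD, PENALTY = 1.0, 3.0  # misbehavior is punished harder than honesty is rewarded

def update_trust(ledger: dict, participant: str, honest: bool) -> None:
    score = ledger.get(participant, 50.0)  # new participants start mid-range
    score = score + REWARD if honest else score - PENALTY
    ledger[participant] = max(0.0, min(100.0, score))  # clamp to [0, 100]

def pick_fusion_partners(ledger: dict, k: int):
    # candidates tend to fuse consensus with the most trusted peers
    return sorted(ledger, key=ledger.get, reverse=True)[:k]
```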

4.
To reduce the storage space consumed by continuously encoded individuals in genetic algorithms, an improved compact genetic algorithm (CGA) that can handle continuously encoded optimization problems is proposed. The algorithm model is built by constructing an effective two-dimensional probability vector to describe continuous individuals and by deriving the corresponding probability-vector update rule and initial values. The continuous CGA applies the idea of probabilistic evolution to continuously encoded individuals, overcoming the need to allocate large amounts of storage to keep individual information. Simulation experiments compare the performance of the continuous CGA with the simple genetic algorithm (SGA) on continuous problems; the results show that the algorithm matches the performance of the SGA while effectively reducing storage consumption, and its termination-condition test is also stronger than that of the SGA.
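A rough sketch in the spirit of the abstract, assuming the natural continuous generalization of the compact GA: each gene is summarized by a (mean, std) pair, a 2 × d "probability vector", instead of a stored population. The update rule here (shift the mean toward the round winner, shrink the std) is an illustrative assumption, not the paper's derived rule.

```python
import random

def continuous_cga(fitness, dim, iters=1000, lr=0.05, lo=-5.0, hi=5.0):
    mean = [0.0] * dim
    std = [(hi - lo) / 2.0] * dim            # wide initial distribution
    for _ in range(iters):
        # sample two candidates from the current model, keep the fitter one
        a = [random.gauss(m, s) for m, s in zip(mean, std)]
        b = [random.gauss(m, s) for m, s in zip(mean, std)]
        winner = a if fitness(a) >= fitness(b) else b
        for i in range(dim):                 # move the model toward the winner
            mean[i] += lr * (winner[i] - mean[i])
            std[i] = max(1e-6, std[i] * (1.0 - lr / 2))
    return mean

# Example: maximize -sum(x^2); the model should drift toward the origin.
best = continuous_cga(lambda x: -sum(v * v for v in x), dim=3)
```

Only the 2 × d model is stored, which is the storage saving the abstract claims over keeping a full population of continuous individuals.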

5.
This paper concerns the hardness of approximating the closest vector in a lattice with preprocessing in the ℓ1 norm, and gives a polynomial-time algorithm for GapCVPPγ in the ℓ1 norm with gap γ = O(n/log n). The gap is smaller than that obtained by simply generalizing the approach given by Aharonov and Regev. The main technical ingredient used in this paper is the discrete Laplace distribution on lattices, which may be of independent interest.
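The abstract does not define the distribution; a plausible form, by analogy with the discrete Gaussian used by Aharonov and Regev (an assumption for illustration, not quoted from the paper), assigns each lattice point mass decaying exponentially in its ℓ1 norm:

```latex
% Discrete Laplace distribution on a lattice L with width parameter s
% (assumed form by analogy with the discrete Gaussian).
\[
  D_{L,s}(x) \;=\; \frac{e^{-\|x\|_1 / s}}{\sum_{y \in L} e^{-\|y\|_1 / s}},
  \qquad x \in L .
\]
```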

6.
In this paper a 3D elastic model for the segmentation of vector fields is proposed and analyzed. Elastic models for segmentation usually involve minimizing the sum of an internal and an external energy. A problem we observed with the standard energies is that the local or global minima reached do not force the external energy to be zero. To eliminate this difficulty, we propose introducing a constraint. The constrained problem is proved to be mathematically well posed, and a simple algorithm that avoids computing the Lagrange multiplier is provided. This algorithm is proved to be convergent. The algorithm is then applied to the segmentation of cardiac magnetic resonance images, and its efficiency is demonstrated in two experiments. Martine Picq is a member of the Institute of Mathematics C. Jordan at the National Institute of Applied Sciences in Lyon, where she has taught mathematics since 1997. Jerome Pousin received a Ph.D. in Applied Mathematics from the University of Paris 6, France, in 1983 and a Ph.D. in Mathematical Sciences from EPFL, Switzerland, in 1992. Since 1993 he has been a professor of mathematics at the National Institute of Applied Sciences in Lyon. His research interests are the approximation of nonlinear partial differential equations with the finite element method, domain decomposition methods, and image segmentation with deformable models. Youssef Rouchdy received a Ph.D. in Applied Mathematics from the National Institute of Applied Sciences in Lyon in 2005. He is currently a postdoc at INRIA Sophia Antipolis, France.
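A plausible way to write the constrained reformulation sketched above, using generic deformable-model notation (the energy names are assumptions, not taken from the paper): instead of minimizing the usual weighted sum, the internal energy is minimized subject to the external energy vanishing.

```latex
% Generic deformable-model energies (assumed notation, for illustration):
% classical formulation vs. the constrained reformulation described above.
\[
  \min_{v}\; E_{\mathrm{int}}(v) + E_{\mathrm{ext}}(v)
  \qquad \longrightarrow \qquad
  \min_{v}\; E_{\mathrm{int}}(v)
  \quad \text{subject to} \quad E_{\mathrm{ext}}(v) = 0 .
\]
```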
