20 similar documents found; search time: 62 ms
1.
In distributed data storage, metadata is kept on a central node, which makes it prone to single points of failure and malicious modification and therefore insecure. Introducing backup nodes can mitigate the problem to some extent, but synchronization and failover between nodes are inefficient; moreover, the nodes holding the metadata can reach agreement to modify it, so the metadata cannot be fully trusted. To address these problems of traditional distributed storage, and drawing on the characteristics of blockchain, a decentralized distributed storage model, DMB (Decentralized Metadata Blockchain), is proposed that guarantees metadata integrity by keeping metadata in blocks, storing the blockchain redundantly, and verifying it cooperatively. The model has two phases: metadata storage and metadata verification. In the metadata storage phase, the user's signature and replica location data are sent to several verification nodes, which generate a metadata block and append it to the metadata blockchain. In the metadata verification phase, a verification node first checks whether the state of its local metadata blockchain matches the global state and synchronizes if it does not, and then searches the local metadata blockchain to verify metadata integrity. Theoretical analysis and experiments show that the DMB model guarantees the traceability and integrity of metadata, has good concurrency, and has little impact on data storage efficiency.
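As a rough illustration of the storage and verification phases described above, the following Python sketch builds a hash-linked metadata block holding a user signature and replica locations, and runs the kind of local chain check a verification node might perform. The field names and the plain-string signature are illustrative assumptions, not the paper's actual data layout.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MetadataBlock:
    """One block of the metadata chain: holds the user's signature,
    the replica locations, and a hash link to the previous block."""
    def __init__(self, prev_hash, user_signature, replica_locations):
        self.prev_hash = prev_hash
        self.user_signature = user_signature        # placeholder string standing in for a real signature
        self.replica_locations = replica_locations  # e.g. ["node-3:/chunks/ab12", ...]
        self.timestamp = time.time()
        self.block_hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps({
            "prev": self.prev_hash,
            "sig": self.user_signature,
            "replicas": self.replica_locations,
            "ts": self.timestamp,
        }, sort_keys=True).encode()
        return sha256_hex(payload)

def verify_chain(chain):
    """Integrity check a verification node could run over its local copy."""
    for i, block in enumerate(chain):
        if block.block_hash != block.compute_hash():
            return False
        if i > 0 and block.prev_hash != chain[i - 1].block_hash:
            return False
    return True

# Example: append two metadata blocks and verify the chain.
genesis = MetadataBlock("0" * 64, "sig(user-A, file-1)", ["node-1:/c/01", "node-4:/c/01"])
block2 = MetadataBlock(genesis.block_hash, "sig(user-B, file-2)", ["node-2:/c/77"])
print(verify_chain([genesis, block2]))  # True
```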
2.
A new model is built for the problem of assigning storage tasks for federated-learning clients' datasets. To balance load across a decentralized cloud storage network, shorten data upload/recovery times, and reduce the client's total storage cost, a data storage task allocation algorithm that considers client requirements and global load, URGL_allo (Allocation Based on User Requirements and Global Load), is proposed. In the node allocation phase, it takes into account the global load, topological properties, and the node resources clients care about, such as storage price and data recovery time, and defines a new node ranking method inspired by the law of universal gravitation to select the best node for each storage task. In the link allocation phase, Dijkstra's algorithm computes the shortest paths from the client node to all other nodes in the network, and among the shortest paths between two nodes the one with the largest bandwidth is assigned. Simulation results show that, compared with a random allocation algorithm (Random_allo), the proposed algorithm reduces the load balancing index and the client's total storage cost by 41.9% and 5%, respectively; its data recovery time is close to that of a bandwidth-based greedy algorithm, remaining stable within (0, 2] and about 1/20 of Random_allo's, and its overall performance in global load and quality of service is better than the baselines.
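The sketch below is a loose, hypothetical interpretation of the two allocation phases: a gravity-style score that combines task size with a node resource "mass" (free capacity discounted by price and recovery time) over squared distance, and a plain Dijkstra pass from the client over link latency. The attribute values, the link table, and the exact form of the score are assumptions, not the paper's formulas.

```python
import heapq

# Hypothetical node attributes: free capacity (GB), price per GB, expected recovery time (s).
nodes = {
    "n1": {"free": 800, "price": 0.10, "recovery": 1.2},
    "n2": {"free": 300, "price": 0.05, "recovery": 0.8},
    "n3": {"free": 600, "price": 0.08, "recovery": 1.5},
}
# Undirected links: (bandwidth, latency); latency is used as the Dijkstra edge cost.
links = {("client", "n1"): (100, 5), ("client", "n2"): (50, 3),
         ("n1", "n3"): (80, 4), ("n2", "n3"): (200, 2)}

def gravity_score(task_size, node, distance):
    """Gravity-style attraction: the 'masses' are the task size and a resource
    score (free capacity discounted by price and recovery time) over distance^2."""
    resource_mass = node["free"] / (node["price"] * node["recovery"])
    return task_size * resource_mass / (distance ** 2)

def dijkstra(adj, src):
    """Plain Dijkstra over link latency; returns the distance map."""
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, (_bw, lat) in adj.get(u, {}).items():
            nd = d + lat
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Build adjacency from the undirected link table.
adj = {}
for (u, v), attrs in links.items():
    adj.setdefault(u, {})[v] = attrs
    adj.setdefault(v, {})[u] = attrs

dist = dijkstra(adj, "client")
task_size = 50  # GB to store
best = max(nodes, key=lambda n: gravity_score(task_size, nodes[n], dist.get(n, float("inf"))))
print("chosen storage node:", best)
```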
3.
Popularity-based data deduplication schemes suffer from dishonest detection authorities and unreliable data storage; a popularity-based deduplication model for decentralized storage is therefore proposed. To deal with dishonest detection authorities, the model combines the tamper resistance of blockchain with the non-repudiation of smart contracts, letting a smart contract act as the detection authority that performs duplicate detection and popularity detection on the data, which guarantees the authenticity of the detection results. To deal with unreliable storage, a file-chain storage structure is proposed that meets the requirements of popularity-based deduplication; by adding auxiliary information it establishes the logical relationship between the physically/logically uploaded shards distributed across different storage nodes, providing the basis for storing popularity-classified data in a decentralized network. In addition, a backup flag is added to each data chunk's information, and the flag is used to divide the storage network into two virtual storage spaces that handle the detection and storage of data and of backup data respectively, satisfying users' backup needs. Security and performance analyses show that the scheme is feasible, guarantees the authenticity of detection results, and improves the reliability of data storage.
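To make the file-chain idea and the backup flag concrete, here is a minimal sketch of per-chunk auxiliary information and a routing helper that sends chunks to one of two virtual storage spaces depending on the flag; the field names and the hash-based placement policy are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChunkInfo:
    """Auxiliary information kept alongside each shard so that shards spread over
    different storage nodes can be re-linked into one logical file chain."""
    chunk_id: str
    file_id: str
    index: int                       # position of the shard within the file
    next_chunk: Optional[str]        # logical link to the next shard's chunk_id
    is_backup: bool = False          # backup flag: routes the shard to the backup space
    storage_node: Optional[str] = None

def assign_node(chunk: ChunkInfo, primary_nodes: List[str], backup_nodes: List[str]) -> str:
    """Route a chunk into one of two virtual storage spaces based on its backup flag."""
    pool = backup_nodes if chunk.is_backup else primary_nodes
    # Simple placement policy for the sketch: hash the chunk id onto the pool.
    node = pool[hash(chunk.chunk_id) % len(pool)]
    chunk.storage_node = node
    return node

# Example: a two-shard file plus a backup copy of its first shard.
c1 = ChunkInfo("c1", "fileA", 0, next_chunk="c2")
c2 = ChunkInfo("c2", "fileA", 1, next_chunk=None)
b1 = ChunkInfo("c1-bak", "fileA", 0, next_chunk=None, is_backup=True)
for c in (c1, c2, b1):
    print(c.chunk_id, "->", assign_node(c, ["p1", "p2", "p3"], ["b1", "b2"]))
```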
4.
5.
To extend the use of erasure codes in blockchains, the repair mechanism of decentralized storage systems is studied, and RS codes in such systems are found to suffer from high cost and low efficiency when repairing multiple nodes. To address this, a multi-node repair transmission model, DSMR, better suited to decentralized network environments is proposed; it fully exploits the data redundancy and computational redundancy of RS codes during multi-node repair. Nodes are selected according to node stability and network hop count, a parallel data-transmission structure is constructed, and repairs are grouped...
6.
鄢喜爱 《网络安全技术与应用》2014,(9):62-63
In distributed storage systems, the probability of multiple node failures is high, so fault tolerance must be considered. RS coding is widely used because of its high performance and simple implementation. This paper introduces common storage fault-tolerance techniques, describes a storage fault-tolerance algorithm based on RS coding, and presents a worked example that is analyzed in detail.
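A minimal hands-on illustration of RS-based fault tolerance, using the third-party reedsolo package (an assumption of this sketch, not something the paper prescribes): a message is encoded with a few parity bytes, two bytes are corrupted, and the codec recovers the original.

```python
# pip install reedsolo   (third-party package; chosen here for illustration)
from reedsolo import RSCodec

rsc = RSCodec(4)                      # 4 parity bytes: corrects up to 2 byte errors
original = b"distributed storage"
encoded = bytearray(rsc.encode(original))

encoded[0] ^= 0xFF                    # corrupt one byte
encoded[5] ^= 0xFF                    # corrupt a second byte

recovered = rsc.decode(bytes(encoded))
print(recovered)                      # decoded message (exact return shape depends on the reedsolo version)
```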
7.
Blockchain technology has attracted wide attention in recent years, and many new blockchain-based applications have emerged; among them, decentralized storage applications such as StorJ and Filecoin have had a good market response. Compared with traditional centralized storage, decentralized storage offers users a new approach to data storage that provides better service scalability while effectively reducing storage cost. In existing decentralized storage schemes, however, user privacy is not effectively protected. This paper therefore introduces a method that strengthens decentralized storage with searchable encryption; the new method incorporates a dynamic accumulator into the encrypted search process, protecting the privacy of stored content while delivering better encrypted-search performance.
8.
9.
李红俊 《数字社区&智能家居》2021,(21):146-149
The organizational model of blockchain-based decentralized finance differs both from traditional corporate management and from blockchain open-source communities; it is a new model of organizational governance. Traditional firms separate ownership from decision-making, open-source blockchain communities merge the two, and decentralized finance organizations sit in between. Compared with traditional banks, this new organizational model offers flexible deposits and loans and efficient settlement, but it also suffers from code vulnerabilities and decision-making efficiency...
10.
《计算机应用与软件》2014,(8)
By combining (n,k)-RS codes with X-codes, a new class of exact-repair codes for cloud storage systems, X-regenerating codes, is designed. The code tolerates n-k node failures, and when a single node or two nodes fail, repair only needs to download data blocks from a small number of nodes and uses simple XOR operations. The storage cost, repair bandwidth, repair locality (the number of nodes contacted during repair), and code rate of X-regenerating codes are analyzed and compared with RS codes, SRC, and LRC. The results show that for one or two node failures, X-regenerating codes have clear advantages in repair locality and repair bandwidth, and can achieve arbitrarily high code rates.
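The following sketch illustrates only the XOR-repair flavor of the idea in its simplest possible form, a single parity block over k data blocks; it is not the actual X-regenerating construction, whose two-failure repair and locality properties require the combined RS/X-code layout described in the paper.

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Hypothetical layout: 4 data blocks on 4 nodes plus one XOR parity block on a 5th node.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_blocks(data)
nodes = data + [parity]

# Simulate the failure of node 2 and repair it from the survivors with XOR only.
lost_index = 2
survivors = [blk for i, blk in enumerate(nodes) if i != lost_index]
repaired = xor_blocks(survivors)
assert repaired == data[lost_index]
print("repaired block:", repaired)
```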
11.
The optimal load redistribution problem is solved in this paper for heterogeneous non-dedicated service grids in a decentralized way. A coordination policy is proposed to make networked servers reach their optimal generic task acceptance rates in order to minimize the average service time of all the generic tasks in a grid. Autonomous servers networked in the grid only need to coordinate with their neighbors iteratively, and their optimal generic acceptance rates are reached by task migration among them. The design scheme of the policy is introduced, and the convergence properties and the implementation aspects of the coordination system are discussed in detail in this paper. A set of computer simulations has been conducted, validating the effectiveness of the proposed approach. Copyright © 2010 John Wiley & Sons, Ltd.
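As a toy picture of decentralized coordination with neighbors only, the sketch below lets each server repeatedly compare its normalized load with its neighbors' and shift acceptance rate toward the less loaded side; the topology, service rates, step size, and update rule are illustrative assumptions, not the paper's coordination policy.

```python
# Diffusion-style neighbor coordination with made-up parameters.
neighbors = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2"]}
speed = {"s1": 2.0, "s2": 1.0, "s3": 0.5}            # heterogeneous service rates
accept = {"s1": 1 / 3, "s2": 1 / 3, "s3": 1 / 3}     # initial generic-task acceptance rates
alpha = 0.1                                          # step size

def expected_load(s):
    return accept[s] / speed[s]                      # proportional to mean service time

for _ in range(200):                                 # iterative local coordination
    new_accept = dict(accept)
    for s, nbrs in neighbors.items():
        for n in nbrs:
            diff = expected_load(s) - expected_load(n)
            # Shift acceptance rate from the more loaded server to the less loaded one.
            delta = alpha * diff * min(speed[s], speed[n]) / 2
            new_accept[s] -= delta
            new_accept[n] += delta
    # Keep rates non-negative and normalized so they still sum to 1.
    total = sum(max(v, 0.0) for v in new_accept.values())
    accept = {k: max(v, 0.0) / total for k, v in new_accept.items()}

print({k: round(v, 3) for k, v in accept.items()})   # faster servers end up accepting more tasks
```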
12.
Modern single- and multi-processor computer systems incorporate, either directly or through a LAN, a number of storage devices with diverse performance characteristics. These storage devices have to deal with workloads with unpredictable burstiness. A storage-aware caching scheme, one that partitions the cache among the disks and aims at balancing the work across the disks, is necessary in this environment. Moreover, maintaining the proper size for these partitions is crucial. Adjusting the partition size after each epoch (a certain time interval) assumes that the workload in the subsequent epoch will show characteristics similar to those observed in the current epoch. However, in an environment with a highly bursty and time-varying workload such an approach seems optimistic. Moreover, the existing storage-aware caching schemes assume a linear relationship between cache size and hit ratio. In practice, however, a (disk) partition may accumulate cache blocks (and thus choke the remaining disks) without increasing the hit ratio significantly. This disk choking phenomenon may degrade the performance of the disk system. In this paper, we address the issues of continuous repartitioning and disk choking. First, we present a caching scheme that continuously adjusts the partition size, forgoing any periodic activity. Then, considering the disk choking issue, we present a repartitioning framework based on the notion of marginal gains. Experimental results show the effectiveness of our approach. We show that our scheme outperforms the existing storage-aware caching schemes.
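A small sketch of the marginal-gain idea: each partition gets an assumed diminishing-returns hit-ratio curve, and cache blocks migrate from the partition with the lowest marginal gain to the one with the highest, without any epoch boundary. The curves, partition sizes, and step size are made up for illustration.

```python
import math

def hit_ratio(disk, size):
    """Assumed diminishing-returns hit-ratio curve per disk (illustrative only)."""
    scale = {"d0": 40.0, "d1": 120.0, "d2": 400.0}[disk]
    return 1.0 - math.exp(-size / scale)

def marginal_gain(disk, size, step=8):
    """Extra hit ratio expected from giving this partition `step` more blocks."""
    return hit_ratio(disk, size + step) - hit_ratio(disk, size)

partitions = {"d0": 256, "d1": 256, "d2": 256}   # equal split of a 768-block cache
for _ in range(100):                             # continuous, epoch-free adjustment
    gains = {d: marginal_gain(d, s) for d, s in partitions.items()}
    donor = min(gains, key=gains.get)
    receiver = max(gains, key=gains.get)
    if donor == receiver or partitions[donor] < 8:
        break
    partitions[donor] -= 8
    partitions[receiver] += 8

print(partitions)   # blocks migrate toward the disk whose curve is still steep
```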
13.
The Journal of Supercomputing - Attribute-based encryption (ABE) can enable user-centered data sharing in untrusted cloud scenarios where users usually lack control over their outsourced data. However,...
14.
Data broadcasting is an efficient method to disseminate information to a large group of requesters with common interests. Performing such broadcasts typically involves determining a broadcast schedule intended to maximize the quality of service provided by the broadcast system. Earlier studies have proposed solutions to this problem in the form of heuristics and local search techniques designed to achieve minimal deadline misses or maximal utility. A factor often ignored in these studies is that the data items may not be available locally but instead have to be fetched from data servers distributed over a network, which induces a certain level of stochasticity in the actual time required to serve a data item. This stochasticity is introduced by the data servers, which themselves dynamically manage the serving of data requests. In this paper we revisit the problem of real-time data broadcasting under such a scenario. We investigate the efficiency of heuristics that embed the stochastic nature of the problem in their design and compare their performance with heuristics proposed for non-stochastic broadcast scheduling. Further, we extend our analysis to understand the various factors in the problem structure that influence these heuristics and are often exploited by a better-performing one.
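One way a heuristic can embed this stochasticity, sketched below under assumed data: each pending item is scored by the number of waiting requests it would satisfy per unit of expected fetch time, discounted by the probability (under a normal fetch-time model) of missing its earliest deadline; the item with the highest score is broadcast next. This is an illustrative heuristic, not one of the paper's.

```python
import math

def miss_probability(slack, mean, std):
    """P(fetch/serve time > slack) under an assumed normal fetch-time model."""
    if std <= 0:
        return 0.0 if mean <= slack else 1.0
    z = (slack - mean) / std
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# item -> (waiting requests, earliest deadline slack, fetch-time mean, fetch-time std); made-up values.
pending = {
    "A": (12, 5.0, 2.0, 0.5),
    "B": (3, 1.5, 1.0, 0.8),
    "C": (7, 10.0, 6.0, 2.0),
}

def score(item):
    count, slack, mean, std = pending[item]
    served_in_time = 1.0 - miss_probability(slack, mean, std)
    return count * served_in_time / mean     # expected satisfied requests per unit of service time

next_item = max(pending, key=score)
print("broadcast next:", next_item, {i: round(score(i), 2) for i in pending})
```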
15.
A hybrid fast fractal coding algorithm with two-step screening is proposed. The codebook is first classified according to moment invariants, and the best matching code block for a given range block is searched within its class; range blocks whose matching error exceeds a given threshold are then re-encoded with an entropy-based second pass. Compared with the moment-invariant-based algorithm, the method is more than five times faster at the same peak signal-to-noise ratio; compared with the entropy-based algorithm, the PSNR is improved by nearly 1 dB.
16.
A contour-based scheme for near-lossless shape coding is proposed, aiming at high coding efficiency. For a given shape image, object contours are first extracted and then thinned to a perfect single-pixel width. Next, they are transformed into a chain-based representation and divided into chain segments according to link directions. Two fundamental coding modes are then designed to encode the different types of chain segments, in which the spatial correlations within object contours are analyzed and exploited to improve the coding efficiency as much as possible. Finally, a fast and efficient mode selection method chooses, for each chain segment, whichever of the two modes produces the shorter code. Experiments show that the proposed scheme is considerably more efficient than existing techniques.
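A minimal sketch of the chain-based representation the scheme starts from: an 8-direction Freeman chain code of a single-pixel-wide contour and its differential form, in which the spatial correlation the coder exploits shows up as runs of zeros along straight segments. The tiny square contour is a made-up example, not from the paper.

```python
# Direction index -> (dx, dy), counter-clockwise starting from "east".
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
DIR_INDEX = {d: i for i, d in enumerate(DIRS)}

def chain_code(points):
    """Freeman chain code of a single-pixel-wide contour given as ordered points."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIR_INDEX[(x1 - x0, y1 - y0)])
    return code

def differential(code):
    """Difference of consecutive links modulo 8; straight segments map to 0."""
    return [(b - a) % 8 for a, b in zip(code, code[1:])]

square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
cc = chain_code(square)
print("chain code:   ", cc)                # [0, 0, 2, 2, 4, 4, 6, 6]
print("differential: ", differential(cc))  # [0, 2, 0, 2, 0, 2, 0]
```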
17.
To guarantee availability, traditional cloud storage systems generally rely on mirrored redundant backups, which produce a large amount of redundant data and hurt storage space utilization. To reduce the space occupied by backup data, a storage scheme is proposed that abandons mirrored redundancy in favor of parity-coded backups, reducing the amount of backup data, and that verifies backups with a conflict-jump mechanism so that the number of backups is reduced while their validity is still guaranteed. Comparison of simulation results with mainstream cloud storage schemes shows that the proposed scheme maintains data reliability while significantly reducing the disk space consumed by distributed storage.
18.
19.
To address security problems in data storage, a secure storage scheme is proposed for a hybrid drive consisting of a conventional mechanical disk, a solid-state drive, and a dedicated control chip (SoC). The scheme uses the SoC's registration code for identity authentication and the SoC's built-in configurable encryption modules (AES-128/256, 3DES, etc.) to encrypt and protect data; combined with the hybrid drive architecture, it also discusses further data protection methods such as multi-user boot, information backup, and instant restore.
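A software-only sketch of the AES-256 protection idea (in the paper the cipher runs inside the SoC, gated by the registration-code authentication): an AES-256/CTR round trip using the third-party cryptography package, which is an assumption of this illustration, not something the paper specifies.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)      # 256-bit key (in the paper's design it would live inside the SoC)
nonce = os.urandom(16)    # per-write nonce for CTR mode

def encrypt(plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt(ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

data = b"sector 0042: user payload"
assert decrypt(encrypt(data)) == data
print("AES-256/CTR round trip ok")
```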
20.
In this paper, we propose a novel decentralized resource maintenance strategy for peer-to-peer (P2P) distributed storage networks. Our strategy relies on the Wuala overlay network architecture (the WUALA project). While the latter is based, for resource distribution among peers, on the use of erasure codes such as Reed-Solomon codes, here we investigate the system behavior when a simple randomized network coding strategy is applied. We propose to replace Wuala's regular, centralized strategy for resource maintenance with a decentralized strategy in which users regenerate new fragments sporadically, namely every time a resource is retrieved. Both strategies are analyzed, analytically and through simulations, with either erasure coding or network coding. It is shown that the novel sporadic maintenance strategy, when used with randomized network coding, leads to a fully decentralized solution with much lower management complexity than common centralized solutions.
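The sketch below illustrates the randomized network coding ingredient in its simplest GF(2) form: every regenerated fragment is a random XOR combination of the original k fragments, and a Gaussian-elimination rank check counts how many linearly independent combinations have been collected (once k are available the resource can be rebuilt). Fragment contents and parameters are made up; the actual scheme operates on the Wuala-style overlay described in the paper.

```python
import random

random.seed(7)
k = 4
fragments = [bytes([i] * 8) for i in range(1, k + 1)]   # original fragments

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def random_coded_fragment():
    """Pick a random non-zero coefficient vector over GF(2) and XOR accordingly."""
    while True:
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if any(coeffs):
            break
    data = bytes(8)
    for c, frag in zip(coeffs, fragments):
        if c:
            data = xor(data, frag)
    return coeffs, data

def rank_gf2(vectors):
    """Gaussian elimination over GF(2): number of independent coefficient vectors."""
    rows = [list(v) for v in vectors]
    rank, col = 0, 0
    while rank < len(rows) and col < k:
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return rank

coded = [random_coded_fragment() for _ in range(k + 2)]   # a few spare coded fragments
print("independent combinations:", rank_gf2([c for c, _ in coded]), "of", k, "needed")
```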