Similar Documents
20 similar documents found (search time: 250 ms)
1.
This paper addresses traffic optimization and management in wireless networks: when a network comes under anomalous attack, or during large-scale simultaneous logins, traffic can surge dramatically within a short time, putting individual nodes at risk of failure. Traditional scheduling algorithms split traffic at a single node, so a large traffic swing can cause the split to fail and bring down the network. A surge-traffic decomposition and management method based on a multi-task, multi-point mapping decomposition algorithm is proposed. Combining Map and Reduce functions allows data sets built from network features to be passed to a distributed network file system, reducing the storage time those files require. Decomposing the surge-related traffic features together with the tasks in the database yields a large number of sub-tasks, completing the decomposition and management of surge traffic. Experimental results show that the improved algorithm greatly improves the performance of surge-traffic decomposition management and raises network communication efficiency.
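The Map/Reduce decomposition the abstract describes can be caricatured in a few lines. This is a minimal sketch, not the paper's algorithm: the record format (per-node flow volumes) and the keys are hypothetical.

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (sub-task key, flow volume) pairs from raw traffic records."""
    for node, volume in records:
        yield node, volume

def reduce_phase(pairs):
    """Reduce: aggregate volumes per key, splitting surge traffic into sub-tasks."""
    out = defaultdict(int)
    for key, v in pairs:
        out[key] += v
    return dict(out)

traffic = [("n1", 5), ("n2", 3), ("n1", 7)]   # hypothetical flow records
subtasks = reduce_phase(map_phase(traffic))
```

Each aggregated sub-task could then be dispatched independently, which is the point of decomposing a surge instead of routing it through one node.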

2.
To relieve the heavy load on the FTP servers of the National Satellite Meteorological Center, a cross-platform file-sharing system based on network-disk and file-synchronization technology is proposed. The system shares meteorological satellite data through a network disk and updates data with a file-difference synchronization algorithm. On Windows an event-triggered mode, and on Linux the inotify mechanism, monitors file changes in real time. For security, an RBAC-based access-control scheme provides safe permission management, implemented on the .NET platform. The system supports batch data download, synchronized file updates, security management, and permission settings. Practical use shows that the cross-platform file-sharing system delivers reliable, effective data sharing and gives users a more convenient way to obtain data.
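The core of a file-difference synchronization algorithm is to transfer only the blocks that changed. A minimal sketch, with an illustrative (unrealistically small) block size and hash choice rather than the system's actual parameters:

```python
import hashlib

BLOCK = 4  # illustrative block size; real systems use KB-scale blocks

def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block so blocks can be compared cheaply."""
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Return indices of blocks whose hashes differ (the blocks to resend)."""
    oh, nh = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(nh) if i >= len(oh) or oh[i] != h]

old = b"AAAABBBBCCCC"
new = b"AAAAXXXXCCCC"
delta = changed_blocks(old, new)   # only the middle block changed
```

Only `delta`'s blocks cross the network; unchanged blocks are reused from the receiver's copy.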

3.
New Software Picks     
What you have, I have too. Software name: Sync Expert. Rating: ★★★☆. Sync Expert, as its name suggests, synchronizes files. Compared with similar tools, it supports synchronization over the network: set up the directories on the host machine (the computer used as the baseline) and the local directories to be synchronized, and afterwards updates can be synchronized with ease. The settings are also rich; you can specify the file types to be synchronized…

4.
Persistent memory (PM), built from emerging non-volatile storage media, offers strong scalability, byte-addressable access, and low static power, providing a major opportunity for converging main memory and secondary storage. However, the LLC (last-level cache) is volatile and exchanges data with main memory at a typical granularity of 64 B, while PM's atomic persist granularity is 8 B. If a failure occurs while data is being flushed from the LLC to PM, the failure atomicity of the update can be violated, compromising the integrity of the original data. To guarantee failure atomicity, current work mostly issues explicit persist and memory-fence instructions to persist data to PM in order, but this incurs significant overhead, especially during index updates: updating an index often changes the index structure, which requires a large amount of ordered persistence. This work aims to reduce the persistence overhead that PM-based B+ trees incur to guarantee failure atomicity during updates. By analyzing B+ tree node utilization, the persistence overhead of different update patterns, and the relationships between update operations, it proposes a unidirectional data-movement algorithm based on the actual data layout within a node. Deleting in place reduces the persistence overhead of deletions, and the holes that deletions leave inside a node reduce the data movement, and hence the persistence cost, of subsequent insertions. Building on this algorithm, the B+ tree's rebalancing operation is also optimized. Experiments show that, compared with state-of-the-art PM-based B+ trees, the proposed unidirectional-movement B+ tree significantly improves performance under both single and mixed workloads.
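The unidirectional-movement idea (delete in place, let a later insert reuse the hole so no keys shift) can be illustrated with a toy leaf node. The `Leaf` class and its `flushes` counter are hypothetical stand-ins for per-slot persist operations, not the paper's data structure:

```python
class Leaf:
    """Toy PM-style B+ tree leaf: deletes happen in place, leaving a hole;
    a later insert reuses the hole, so no keys move and fewer slots persist."""

    def __init__(self, cap=4):
        self.slots = [None] * cap
        self.flushes = 0          # illustrative count of persist operations

    def insert(self, key):
        i = self.slots.index(None)   # first free slot (possibly a delete hole)
        self.slots[i] = key
        self.flushes += 1            # only the written slot is persisted

    def delete(self, key):
        self.slots[self.slots.index(key)] = None   # in place: one persist
        self.flushes += 1

leaf = Leaf()
for k in (10, 20, 30):
    leaf.insert(k)
leaf.delete(20)      # leaves a hole at slot 1; no shifting of 30
leaf.insert(25)      # reuses the hole: no data movement, one persist
```

Compacting the node on every delete would instead shift keys and persist each moved slot, which is exactly the overhead the unidirectional scheme avoids.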

5.
Conflict Handling and Replica Management in Distributed Directory Synchronization   Total citations: 1 (self: 0, others: 1)
As users own more and more computers, directory synchronization is becoming a very common application for keeping data consistent across machines. In a distributed environment, however, clients join and leave the network frequently: transmission delays are unpredictable, concurrent operations must be re-identified, and modification conflicts cannot be handled with traditional locking. To solve these problems, this paper proposes the MSVersion algorithm, which combines SVN-style conflict detection with Vector Clock replica management: it finds conflicts quickly when merging changed files and reduces the amount of data that must be stored for replica management in a distributed environment. Following the eventual-consistency model, it also gives resolution strategies for three kinds of conflicting operations in directory synchronization.
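The Vector Clock comparison that underlies this kind of replica management decides whether two versions are ordered or truly concurrent (a conflict). A minimal sketch, with made-up machine names:

```python
def compare(vc_a: dict, vc_b: dict) -> str:
    """Compare two vector clocks: 'before'/'after' mean causally ordered,
    'concurrent' means neither dominates, i.e. a genuine edit conflict."""
    keys = set(vc_a) | set(vc_b)
    a_le_b = all(vc_a.get(k, 0) <= vc_b.get(k, 0) for k in keys)
    b_le_a = all(vc_b.get(k, 0) <= vc_a.get(k, 0) for k in keys)
    if a_le_b and b_le_a:
        return "equal"
    if a_le_b:
        return "before"       # b has seen everything a has: safe to take b
    if b_le_a:
        return "after"
    return "concurrent"       # independent edits: needs conflict resolution

r1 = compare({"m1": 2, "m2": 0}, {"m1": 2, "m2": 1})   # ordered versions
r2 = compare({"m1": 2, "m2": 0}, {"m1": 1, "m2": 1})   # conflicting versions
```

Only the `concurrent` case needs a resolution strategy; ordered cases merge automatically, which is why vector clocks reduce the bookkeeping a sync tool must keep.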

6.
To address the low efficiency and difficult partial updates of existing file synchronization methods, a hash-chain construction and file synchronization method is proposed. Changes to files or directories on the server side of a C/S architecture are treated as a series of hash nodes: in time order, a hash function iterates over the hash values of files or directories, forming an ordered hash chain that records every operation state of the file repository. The client only needs to replay the same file operations according to the chain nodes and update accordingly, rather than synchronizing and authenticating every file, which guarantees the repository's integrity, non-repudiation, traceability, and tamper resistance. The ordered hash chain is used to monitor file differences and check consistency across terminals, so file changes can be detected quickly and synchronized logically. Experimental results show an average synchronization speed-up of 94.85% when the repository is unchanged; when the repository changes, the average speed-ups over the Rsync algorithm with the "quick check" strategy and with the default strategy are 6.5% and 69.99% respectively, effectively reducing the time and resources consumed during synchronization.
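An ordered hash chain can be sketched by folding a time-ordered operation log through a hash function, each node's hash covering the previous node. The genesis value and operation strings below are illustrative, not the paper's format:

```python
import hashlib

def extend_chain(prev_hash: str, operation: str) -> str:
    """New chain node: its hash covers the previous node plus this operation,
    so any tampering with history changes every later node."""
    return hashlib.sha256((prev_hash + operation).encode()).hexdigest()

def build_chain(operations) -> list:
    """Fold a time-ordered operation log into an ordered hash chain."""
    chain = ["0" * 64]          # illustrative genesis node
    for op in operations:
        chain.append(extend_chain(chain[-1], op))
    return chain

ops = ["create a.txt", "modify a.txt", "delete b.txt"]
server = build_chain(ops)
client = build_chain(ops[:2])          # client is one operation behind
# The client syncs by replaying only the operations past its chain head:
missing = ops[len(client) - 1:]
```

Because matching chain heads imply identical histories, the client can skip per-file verification entirely when the heads agree, which is where the large unchanged-repository speed-up comes from.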

7.
Research and Implementation of Database-Based Cluster Checkpointing   Total citations: 1 (self: 0, others: 1)
This paper presents a new application-level checkpointing scheme for clusters that differs from existing approaches in two ways. First, a relational database system, rather than flat files, stores the cluster's checkpoints, management data, and resource information, which makes the data easier to index and normalize; moreover, when the data volume is very large, database access outperforms file-system access. Second, a dedicated server is used so that checkpointing and related operations interfere as little as possible with the cluster's own computation, and this management server is mirrored for fault tolerance, which beats mirroring every compute node in both cost and efficiency.
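Storing checkpoints in a relational database rather than flat files might look like the following SQLite sketch. The schema and column names are hypothetical, not the paper's design; the point is that "latest checkpoint for node X" becomes an indexed query instead of a directory scan:

```python
import sqlite3

# In-memory DB for the sketch; a cluster would use a dedicated server.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE checkpoint (
    node TEXT, seq INTEGER, state BLOB, PRIMARY KEY (node, seq))""")

def save(node: str, seq: int, state: bytes) -> None:
    """Persist one node's checkpoint; the primary key indexes lookups."""
    db.execute("INSERT INTO checkpoint VALUES (?, ?, ?)", (node, seq, state))

def latest(node: str):
    """Fetch the most recent checkpoint for a node via an ORDER BY query."""
    row = db.execute("SELECT state FROM checkpoint WHERE node = ? "
                     "ORDER BY seq DESC LIMIT 1", (node,)).fetchone()
    return row[0] if row else None

save("node1", 1, b"s1")
save("node1", 2, b"s2")
restored = latest("node1")
```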

8.
Design of a Maintenance Document Management System Based on a Distributed Database   Total citations: 5 (self: 1, others: 4)
Earlier maintenance document management systems suffered from the tension between decentralized use and centralized management of data. To resolve this, a maintenance document management information system was designed on the Oracle9i distributed database. Database links update networked nodes, while import/export files synchronize nodes that are offline. To balance access speed against ease of management, data is stored either in BLOB columns or in the file system, depending on the document type. Together these techniques build a digital, networked platform for sharing and managing maintenance documents.

9.
File upload is a common feature of Web applications, but current upload methods handle large files poorly: uploads often fail because the file is too large or the network drops, forcing a complete re-upload. With HTML5 came a family of file APIs, such as the FileList, Blob, File, and FileReader interfaces, which let the browser use JavaScript to slice local files and thus implement resumable uploads. Building on this, this paper solves two problems on the server side: users timing out while waiting for the chunks to be merged, and guaranteeing that the merged file is correct.
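A common way to guarantee the merged file's correctness is to have the client send a whole-file digest along with the chunks, and verify it after the server-side merge. A minimal Python sketch of the server-side idea (the chunk size and hash choice are illustrative; real client-side slicing would use `Blob.slice` in JavaScript):

```python
import hashlib

CHUNK = 5   # illustrative chunk size; real uploads use MB-scale chunks

def split(data: bytes) -> list:
    """Client side: slice the file into fixed-size chunks."""
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def merge_and_verify(chunks, expected_md5: str) -> bytes:
    """Server side: merge received chunks, then check the whole-file digest
    before acknowledging, so a corrupt merge is detected, not served."""
    merged = b"".join(chunks)
    if hashlib.md5(merged).hexdigest() != expected_md5:
        raise ValueError("merged file corrupt; ask client to re-send")
    return merged

data = b"hello, resumable upload"
whole_md5 = hashlib.md5(data).hexdigest()   # computed by the client
restored = merge_and_verify(split(data), whole_md5)
```

To avoid the user-facing wait, the merge-and-verify step would run asynchronously and the client would poll for the result instead of blocking on the final request.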

10.
Cluster file systems are a current research focus in storage. With fixed resources, the ratio of metadata servers to data storage servers to client nodes has a large impact on system performance. This paper analyzes the configuration parameters of the Lustre cluster file system and, for two typical workloads (file serving and Web serving), measures system performance under different node counts and Lustre stripe sizes. Comparative analysis yields an optimal Lustre configuration, offering a reference for improving cluster file system performance. The tests show that performance is best when the number of OST nodes is roughly equal to the number of client nodes.

11.
With the growth of P2P file-sharing systems, people are no longer satisfied with sharing read-only, static files, and systems with mutually writable, dynamic files have emerged, raising the replica-inconsistency problem. To maintain replica consistency, existing strategies redundantly propagate far too many update messages, because globally updated path information is not shared. To address this problem, we propose an optimized update-path selection strategy that lets nodes share update-path information by applying clone, variation, and crossover operations to the update paths. We also present a repeated-update strategy to cope with churn, so that replica consistency is maintained as far as possible even when some nodes temporarily leave the network. Simulation results show that our strategy significantly reduces the number of redundant update messages without lowering message coverage, thus improving the availability of unstructured P2P networks.

12.
The design of distributed processing systems has increasingly drawn the attention of systems designers as more and more organizations recognize the advantages of decentralizing operations. However, determining an optimal or preferred distributed database configuration is a nontrivial task. A viable model for allocating files to nodes in a distributed system must consider the tradeoffs between cost and service and the unique characteristics of the system, including the planned redundancy inherent in the file organization. It must also be solvable within the current state of the art. This paper presents a mixed-integer linear programming model that assigns replicated and/or partitioned files to the nodes of a distributed network while accounting for the query, update, and storage costs associated with the file assignments. The resulting model, though large, is solvable even for networks with a substantial number of nodes and links. Because the model relies on an average communication cost to determine file locations, results are presented comparing its performance to that of a model using exact communication costs.

13.
The allocation of data and operations to nodes in a computer communications network is a critical issue in distributed database design. An efficient distributed database design must trade off performance and cost among retrieval and update activities at the various nodes. It must consider the concurrency-control mechanism used, as well as capacity constraints at nodes and on links in the network. It must determine where data will be allocated, the degree of data replication, which copy of the data will be used for each retrieval activity, and where operations such as select, project, join, and union will be performed. We develop a comprehensive mathematical modeling approach for this problem. The approach first generates the units of data (file fragments) to be allocated from a logical data model representation and a characterization of retrieval and update activities. Retrieval and update activities are then decomposed into relational operations on these fragments. Both the fragments and the operations on them are allocated to nodes using a mathematical model that considers network communication, local processing, and data storage costs. A genetic algorithm is developed to solve this mathematical formulation.
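A genetic algorithm for fragment allocation can be sketched with a tiny cost matrix. This is a toy under stated assumptions: the 2-node network and `COST` matrix are made up, and the fitness covers only placement cost, not the paper's full communication/processing/storage model:

```python
import random

random.seed(1)   # deterministic run for the sketch

# Hypothetical network: COST[f][n] = cost of placing fragment f on node n
COST = [[4, 1], [2, 5], [3, 3], [1, 6]]

def fitness(chrom):
    """Total placement cost of an allocation; the GA minimizes this."""
    return sum(COST[f][n] for f, n in enumerate(chrom))

def crossover(a, b):
    """One-point crossover between two parent allocations."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom):
    """Reassign one random fragment to a random node."""
    c = chrom[:]
    c[random.randrange(len(c))] = random.randrange(2)
    return c

# Evolve: keep the 4 cheapest allocations, refill with mutated offspring.
pop = [[random.randrange(2) for _ in COST] for _ in range(8)]
for _ in range(30):
    pop.sort(key=fitness)
    parents = pop[:4]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(4)]
    pop = parents + children

best = min(pop, key=fitness)
```

The elitist step (keeping `parents` unchanged) guarantees the best allocation never worsens across generations; a real formulation would add capacity constraints and per-query routing to the fitness.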

14.
Constructing a shortest-path tree (SPT) is an important problem in dynamic networks. When an edge's state changes, the SPT must be rebuilt dynamically; repeated recomputation not only costs a great deal of time but also makes the tree change frequently. This paper proposes a stable SPT construction algorithm that makes the tree more stable on dynamic networks, i.e., fewer operations are needed to update it. The algorithm records unstable edges that change frequently and avoids adding them to the tree where possible, efficiently reducing the operations triggered by edge changes. Experimental results show that, compared with traditional dynamic SPT algorithms, this algorithm yields a more stable tree, cutting update time by 57.24% and node updates by 43.6%.
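One simple way to bias a shortest-path tree away from frequently changing edges is to penalize unstable edges during Dijkstra's relaxation, so the tree prefers an equally short stable path when one exists. This is a stand-in heuristic for illustration, not the paper's algorithm; the graph and penalty value are made up:

```python
import heapq

def stable_spt(graph, source, unstable, penalty=10):
    """Dijkstra variant: edges in `unstable` get an extra cost, steering the
    resulting shortest-path tree toward stable edges. Returns parent pointers."""
    dist = {source: 0}
    parent = {source: None}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            cost = w + (penalty if (u, v) in unstable else 0)
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                parent[v] = u
                heapq.heappush(pq, (dist[v], v))
    return parent

# Two equal-length paths a->b->d and a->c->d; edge (b, d) flaps frequently.
g = {"a": [("b", 1), ("c", 1)], "b": [("d", 1)], "c": [("d", 1)], "d": []}
tree = stable_spt(g, "a", unstable={("b", "d")})
```

The tree routes `d` through `c`, avoiding the flapping edge, so future changes to `(b, d)` trigger no tree updates at all.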

15.
We consider a two-tier content distribution system for distributing massive content, consisting of an infrastructure content distribution network (CDN) and a large number of ordinary clients. The nodes of the infrastructure network form a structured, distributed-hash-table-based (DHT) peer-to-peer (P2P) network. Each file is first placed in the CDN and possibly replicated among the infrastructure nodes depending on its popularity. In such a system, it is particularly pressing to have proper load-balancing mechanisms to relieve server or network overload. The subject of this paper is popularity-based file replication within the CDN using multiple hash functions. Our strategy is to set aside a large number of hash functions; when the demand for a file exceeds the overall capacity of its current servers, a previously unused hash function is used to obtain a new node ID where the file will be replicated. The central problems are how to choose an unused hash function when replicating a file and how to choose a used hash function when requesting it. Our solution to the replication problem is to choose the unused hash function with the smallest index, and our solution to the request problem is to choose a used hash function uniformly at random. Our main contribution is a set of distributed, robust algorithms implementing these solutions, along with an evaluation of their performance. In particular, we analyze a random binary search algorithm for file requests and a random gap removal algorithm for failure recovery.
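The two rules, replicate with the smallest unused hash index and request through a uniformly random used index, can be sketched directly. The i-th hash function is simulated here by salting a single hash with the index; the class and names are illustrative, not the paper's implementation:

```python
import hashlib
import random

def node_id(filename: str, i: int) -> str:
    """The i-th hash function, simulated by salting one hash with index i."""
    return hashlib.sha1(f"{filename}#{i}".encode()).hexdigest()[:8]

class ReplicaSet:
    """Replicas of one file live at node_id(name, 0..k-1) for some k."""

    def __init__(self, name: str):
        self.name, self.k = name, 1    # hash index 0 is the original placement

    def replicate(self) -> str:
        """On overload: use the smallest unused hash index for the new copy."""
        self.k += 1
        return node_id(self.name, self.k - 1)

    def request(self) -> str:
        """Pick a used hash function uniformly at random to spread load."""
        return node_id(self.name, random.randrange(self.k))

rs = ReplicaSet("movie.mp4")
new_node = rs.replicate()      # placed via hash index 1
served_from = rs.request()     # one of hash indices 0..1
```

Because the used indices always form the contiguous range 0..k-1, a requester who does not know k can probe for the boundary, which is what the paper's random binary search over indices exploits.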

16.
Disk access is currently the performance bottleneck of SAN storage systems built on Fibre Channel networks. Drawing on published studies of file-system I/O workloads, this paper proposes a new way to improve FC-SAN performance: file-system I/O operations are divided into large file reads/writes, small file reads/writes, and file-attribute operations. Large reads and writes follow the original I/O path to physical disk, while all other operations, including small reads and writes, are served by a memory-based RAMDisk device. Experiments show that the access rate of an FC-SAN system with this hybrid I/O subsystem can approach line speed.
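The routing rule (small I/O and attribute operations to RAMDisk, large transfers to physical disk) amounts to a dispatch function on operation type and size. A minimal sketch; the size threshold and operation names are made-up illustrations, not the paper's values:

```python
RAMDISK_THRESHOLD = 64 * 1024   # hypothetical cutoff for "small" I/O, in bytes

def route(op: str, size: int) -> str:
    """Send attribute operations and small reads/writes to the RAMDisk;
    stream large transfers over the normal FC-SAN path to physical disk."""
    if op == "attr" or size < RAMDISK_THRESHOLD:
        return "ramdisk"
    return "disk"

small = route("write", 4 * 1024)   # small write: memory-speed path
large = route("read", 1 << 20)     # 1 MiB read: physical disk path
meta = route("attr", 0)            # metadata: memory-speed path
```

The rationale is that small and metadata operations dominate request counts while large transfers dominate bytes, so serving the former from memory removes most disk seeks without caching bulk data.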

17.
To let a group of mobile phones within one hop share files quickly and accurately, and thus work collaboratively, this system was designed and implemented on the Android platform. A reliable broadcast algorithm based on network coding guarantees fast, reliable file transfer, and an opportunistic-coding, network-wide synchronization algorithm keeps the files of all nodes in the phone group synchronized at any moment. Real-world experiments show that file-sharing latency does not grow with the number of phones in the group, and the volume of data on the network during synchronization drops sharply; compared with concurrent sharing and other methods, the system therefore provides a fast, reliable way to share and synchronize files within a phone group.

18.
As the number and variety of files users store and use grow rapidly, existing file storage systems are becoming unable to manage this information effectively. Traditional file systems follow a strict hierarchy, organizing files as a tree, so users can reach a file only through a single storage path. To overcome these limitations, VFSS was designed and developed: it makes full use of the metadata of stored files, combines the file storage system with database technology, and organizes files as a network. VFSS provides a rich user interface while still supporting traditional file-system operations.

19.
Data replication is becoming a popular technique in many fields, such as cloud storage, data grids, and P2P systems. By replicating files to other servers/nodes, we can reduce network traffic and file access time and increase data availability in the face of natural and man-made disasters. However, this does not mean that more replicas always yield better system performance. Replicas do decrease read access time and provide better fault tolerance, but if we consider write access, maintaining a large number of replicas incurs a huge update overhead. Hence a trade-off between read access time and write update cost is needed. File popularity is an important factor in replication decisions, and historical file popularity can be used to select genuinely popular files while avoiding data-access fluctuations. This research proposes a dynamic data replication strategy based on two ideas. The first employs historical access records, which are useful for choosing a file to replicate. The second is a proactive deletion method, applied to control the replica count so as to reach an optimal balance between read access time and write update overhead. A unified cost model is used to measure and compare the performance of our data replication algorithm against existing algorithms. The results indicate that the new algorithm performs much better than those algorithms.
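The two ideas, historical popularity for choosing what to replicate and proactive deletion for capping update overhead, might be combined as in this sketch. The exponential weighting, threshold, and replica cap are all hypothetical, not the paper's cost model:

```python
def decide(history, replicas, threshold=100, max_replicas=3):
    """Replicate when smoothed historical popularity is high; proactively
    delete a replica when popularity falls, capping write-update overhead."""
    # Exponentially weight recent epochs so one access spike cannot dominate.
    popularity = sum(h * 0.5 ** age for age, h in enumerate(reversed(history)))
    if popularity > threshold and replicas < max_replicas:
        return "replicate"
    if popularity < threshold / 2 and replicas > 1:
        return "delete"
    return "keep"

hot = decide([40, 80, 160], replicas=1)     # rising demand: add a replica
cooling = decide([100, 10, 2], replicas=3)  # fading demand: shed a replica
```

Using a smoothed history rather than the latest epoch alone is what prevents replica-count thrashing when access rates fluctuate.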

20.
Fault tolerance is very important in cluster computing and has been implemented in many well-known cluster computing systems using checkpoint/restart mechanisms. But existing checkpointing algorithms cannot restore the state of the file system when rolling back a running program, so existing fault-tolerance systems place many restrictions on file access. This paper presents the SCR algorithm, based on atomic operations and consistent scheduling, which can restore file-system state. File operations are classified into idempotent operations, which do not modify file-system state, and non-idempotent operations, which do. The SCR algorithm tracks changes to file-system state: it logs to disk each non-idempotent operation used by user programs, together with the information needed to restore it. When rolling back a program to a checkpoint, SCR reverts the file system to its state at the last checkpoint time. With SCR, users are allowed to use any file operation in their programs.
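The idempotent/non-idempotent split can be sketched as a tiny operation log: only state-changing operations are recorded, and a rollback truncates the log to its length at the checkpoint. The operation names and checkpoint handling are illustrative, not SCR's actual implementation:

```python
# Hypothetical classification: these operations change file-system state.
NON_IDEMPOTENT = {"write", "create", "unlink", "truncate"}

log = []   # on-disk log in the real system; in-memory list for the sketch

def file_op(op: str, path: str) -> None:
    """Record state-changing (non-idempotent) operations for rollback;
    idempotent operations like reads need no log entry."""
    if op in NON_IDEMPOTENT:
        log.append((op, path))

def rollback(checkpoint_len: int) -> None:
    """Revert the log to its length at the last checkpoint; the real system
    would also undo each discarded operation on the file system."""
    del log[checkpoint_len:]

file_op("read", "a.txt")       # idempotent: not logged
file_op("write", "a.txt")      # non-idempotent: logged
cp = len(log)                  # checkpoint taken here
file_op("unlink", "b.txt")     # logged after the checkpoint
rollback(cp)                   # discard everything past the checkpoint
```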

