Found 20 similar documents (search took 156 ms)
1.
2.
3.
4.
5.
6.
Starting from the practical needs of enterprises, this paper summarizes several problems with current backup software and, based on the key techniques used in the backup process, designs a file backup tool for Linux built on the inotify mechanism and the rsync algorithm. The system triggers different types of synchronization events in real time, identifies each event's type, and automatically applies the appropriate handling for each kind of file synchronization event. The rsync algorithm is used to compute file differences, reducing the amount of data transferred and easing bandwidth pressure.
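The rsync algorithm's key trick is a weak rolling checksum that can slide over a file one byte at a time in O(1), so matching blocks can be found cheaply before any strong hash is computed. A minimal Python sketch modeled on the rsync design (constants and naming are illustrative, not the backup tool's actual code):

```python
def weak_checksum(data: bytes) -> int:
    # rsync-style weak checksum over one block:
    # a = plain sum of bytes, b = position-weighted sum, both mod 2^16
    a = sum(data) % 65536
    b = sum((len(data) - i) * byte for i, byte in enumerate(data)) % 65536
    return (b << 16) | a

def roll(csum: int, old_byte: int, new_byte: int, block_len: int) -> int:
    # slide the window one byte forward: O(1) update instead of re-summing
    a = csum & 0xFFFF
    b = csum >> 16
    a = (a - old_byte + new_byte) % 65536
    b = (b - block_len * old_byte + a) % 65536
    return (b << 16) | a
```

Rolling from offset s to s+1 must give the same value as recomputing the checksum from scratch over the new window; that invariant is what makes scanning every offset of a large file affordable.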
7.
Building on existing content-based file type identification algorithms and addressing problems in statistical feature extraction, this work partitions a file's binary content using both fixed-length and variable-length windows to extract statistical features, and applies feature selection to file type identification: a feature selection evaluation function combining feature breadth and stability is designed to pick signature features, from which a file type model is built and used as the standard for identifying file types. The algorithm does not rely on the structure or key markers of specific file types...
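To make the statistical-feature idea concrete, here is an illustrative Python sketch: a fixed-length-window byte histogram as the feature vector, plus a stand-in evaluation function combining breadth and stability. The paper's exact evaluation function is not given in this abstract, so the scoring below is an assumption for illustration only.

```python
from collections import Counter

def byte_histogram(data: bytes, window: int = 256) -> list:
    # fixed-length window: take the first `window` bytes and
    # return normalized frequencies for all 256 byte values
    head = data[:window]
    counts = Counter(head)
    total = len(head) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def feature_score(values: list) -> float:
    # illustrative selection score: breadth (how often the feature is
    # present across samples) damped by variance (instability)
    breadth = sum(1 for v in values if v > 0) / len(values)
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return breadth / (1 + var)
```

A feature that appears in every sample of a type with near-constant frequency scores highest, matching the intuition of "breadth times stability" in the abstract.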
8.
9.
A Load Balancing Algorithm Based on Zipf's Law in VoD Clusters (cited: 1; self-citations: 0, citations by others: 1)
Load balancing strategies for VoD server clusters mainly involve two aspects: file replication and request dispatching. Most existing algorithms consider only one of the two in isolation. To address this shortcoming, a load balancing optimization model is proposed that considers both replication and dispatching in a VoD system. A replication algorithm based on the Zipf-like distribution law and a least-load-first dispatching algorithm are proposed, increasing the replication rate of requested files and reducing the load imbalance of the cluster. Exploiting the fact that VoD file request rates follow a Zipf-like distribution, files are grouped by priority, lowering the algorithm's execution complexity. Simulation results confirm its correctness.
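A small Python sketch of the Zipf-like replication idea: the request probability of the i-th most popular file is taken proportional to 1/i^θ, and a fixed replica budget is spread accordingly. The value of θ and the rounding policy are illustrative assumptions, not the paper's parameters.

```python
def zipf_like_popularity(n_files: int, theta: float = 0.8) -> list:
    # Zipf-like law: request probability of the i-th most popular file ∝ 1/i^theta
    weights = [1.0 / (i ** theta) for i in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def replica_counts(pop: list, total_replicas: int, min_copies: int = 1) -> list:
    # every file keeps at least one copy; the remaining budget is spread
    # proportionally to popularity, so hot files get more replicas
    extra = total_replicas - min_copies * len(pop)
    counts = [min_copies + int(extra * p) for p in pop]
    # hand leftover copies (lost to rounding) to the most popular files
    leftover = total_replicas - sum(counts)
    for i in range(leftover):
        counts[i % len(counts)] += 1
    return counts
```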
10.
Tian Yue, 《计算机光盘软件与应用》 (Computer CD Software and Applications), 2014, (21): 117+119
Previously, system and data backups of x86 servers were performed with file-level backup techniques using directly attached tape drives or tape libraries. That backup architecture was scattered, hard to manage and control centrally, and for large volumes of data with complex file structures the backup jobs ran far too long to meet the backup windows required by production workloads. By building a centralized server backup system, backup policies and related configuration can be pushed to each backup node and backup data collected centrally, achieving unified management and monitoring of backup jobs across all x86 servers. The system can also use block-level backup to create image files of all logical disks on a target server, replacing the old file-level approach and greatly improving backup performance for large-scale data.
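The advantage of block-level over file-level backup can be sketched as follows: hash fixed-size blocks of a logical disk and back up only the blocks whose hash changed since the last image, regardless of how many files or how complex a structure sits on top. This is a simplified illustration, not the deployed system's code.

```python
import hashlib

def block_hashes(disk: bytes, block_size: int = 4096) -> list:
    # fingerprint each fixed-size block of the logical disk image
    return [hashlib.sha256(disk[i:i + block_size]).hexdigest()
            for i in range(0, len(disk), block_size)]

def changed_blocks(disk: bytes, baseline: list, block_size: int = 4096) -> list:
    # incremental block-level pass: keep only blocks whose hash differs
    # from the previous image (or which did not exist in it)
    changed = []
    for i in range(0, len(disk), block_size):
        block = disk[i:i + block_size]
        idx = i // block_size
        if idx >= len(baseline) or baseline[idx] != hashlib.sha256(block).hexdigest():
            changed.append((idx, block))
    return changed
```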
11.
Based on an analysis of the MMPacking algorithm, an improved file allocation algorithm is proposed. When allocating files as in MMPacking, each file's allocation or replication is driven by the accumulated file demand at each node, taking the node's residual capacity into account. Allocation proceeds in rounds over all server nodes; after each round, a new round starts again from the first node, and before it begins the server nodes are re-sorted in descending order of residual capacity. Within a round, after each file is assigned to a node, the algorithm checks whether the current node's residual capacity is still greater than that of the next node; if so, a new round of allocation begins. The improved algorithm reduces the extra cost incurred when client demand or server configuration changes, effectively achieving load balancing. Simulation results show that it outperforms MMPacking.
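A simplified Python sketch of round-based allocation in this spirit: each round re-sorts servers by residual capacity (descending) and assigns the most-demanded pending files first. The mid-round restart condition of the actual algorithm is omitted here for brevity; this is a stand-in, not the paper's algorithm.

```python
def allocate(files: list, capacities: list) -> dict:
    # files: (name, demand) pairs; capacities: residual capacity per server.
    # Each round walks the servers in descending order of residual capacity
    # and gives each one the most-demanded file still pending.
    residual = list(capacities)
    placement = {}
    pending = sorted(files, key=lambda f: f[1], reverse=True)
    while pending:
        order = sorted(range(len(residual)),
                       key=lambda s: residual[s], reverse=True)
        for server in order:
            if not pending:
                break
            name, demand = pending.pop(0)
            placement[name] = server
            residual[server] -= demand
    return placement
```

With two equal servers and demands 5, 3, 2, 1, the rounds interleave assignments so the final loads (6 vs 5) stay close, which is the balancing effect the abstract describes.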
12.
In recent years, peer-to-peer (P2P) networks have become a key enabling technology for efficient large-scale data distribution, thanks to efficient chunking and dissemination mechanisms. Addressing the fact that the tracker-side peer selection algorithm in the BitTorrent file distribution system does not consider peer activeness, a tracker-side peer selection algorithm based on activeness is proposed. The algorithm selects highly active peers to build a more efficient distribution network that better matches the needs of requesting peers, helping them complete downloads more efficiently. Experimental results show that the improved selection algorithm shortens file download times, improves distribution efficiency, and boosts overall system performance.
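A minimal sketch of activeness-based peer selection on the tracker side. The scoring function below (upload rate discounted by time since the peer's last announce) is an illustrative stand-in; the paper's actual activeness metric is not given in this abstract.

```python
import time

def select_peers(peers, k=50, now=None):
    # return the k most "active" peers for a requesting client;
    # peers are dicts with assumed fields "upload_rate" and "last_seen"
    now = time.time() if now is None else now

    def activeness(p):
        # discount a peer's upload rate by how stale its last announce is
        age = max(now - p["last_seen"], 1.0)
        return p["upload_rate"] / age

    return sorted(peers, key=activeness, reverse=True)[:k]
```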
13.
Su Jiawen, 《计算机研究与发展》 (Journal of Computer Research and Development), 1994, 31(9): 18-23
The main work of this paper is the development of a UNIX file server on top of a microkernel. In a traditional system, the file system is naturally a component of the kernel, but in a microkernel architecture the file server, like any other server, is simply a user-level server: it publishes its own service ports and accepts message requests from other components to provide file access.
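The message-passing structure can be sketched in a few lines, modeling the published service port as a queue of request messages. This is a toy illustration of the architecture, not the paper's implementation.

```python
import queue

class FileServer:
    # a user-level file server in the microkernel style: no kernel hooks,
    # just a published service port receiving (op, name, reply_port) messages
    def __init__(self):
        self.port = queue.Queue()          # the published service port
        self.files = {"motd": b"hello"}    # in-memory stand-in for a disk

    def serve_one(self):
        # pull one message off the port and answer on the caller's reply port
        op, name, reply = self.port.get()
        if op == "read":
            reply.put(self.files.get(name, b""))
```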
14.
15.
To meet the needs of a file upload and receiving service in a large-scale network environment, a cluster-based file receiving system is designed that uses multiple receiving servers, improving stability and availability. Based on the characteristics of file transfer, and considering both file traffic load and each server's current load, a load balancing algorithm using composite load statistics is proposed; test results show that a file receiving cluster based on this algorithm balances load well. A load balancing engine for the system was designed and implemented, resolving the system's load imbalance and improving its operating efficiency.
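A composite load score can be sketched as a weighted sum of traffic load and server load, with the next upload dispatched to the server holding the lowest score. The weights and field names below are illustrative assumptions, not the paper's measured statistics.

```python
def composite_load(server, w_traffic=0.6, w_cpu=0.4):
    # weighted combination of file traffic utilization and server (CPU)
    # utilization, both assumed normalized to [0, 1]
    return w_traffic * server["traffic_util"] + w_cpu * server["cpu_util"]

def pick_server(servers):
    # dispatch the next incoming upload to the least-loaded server
    return min(servers, key=composite_load)
```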
16.
Using a central file server is good for interactive access to files, because of the coherency implied by a centralized design. In fact, within local area networks, this is a common case. However, distributed environments in use today may exhibit round-trip times on the order of 50 or 100 ms. This is a problem for interactive file access to a central file server because of the resulting access times. Although aggressive caching and loosely synchronized replicas may be used for distributed file access, there are cases where the better coherency provided by a central server is still desirable. In this paper, we present ZX, a distributed file system and protocol designed with latency in mind. It can use caching, but it does not require caching or batching to address latency issues. ZX relies on a novel channel-based file system interface. It includes find requests and leverages streaming requests to work well under high-latency conditions. Unlike other protocols designed for distributed access to a central server, ZX tolerates round-trip times on the order of 50 or 100 ms to access a central file server for interactive usage such as compiling shared sources, running binaries, editing documents, and other similar workloads. It can be used on UNIX using a FUSE adaptor while permitting native ZX speakers to run faster.
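Why streaming requests matter at 50-100 ms round-trip times can be seen from a back-of-the-envelope latency model (this model is an illustration, not part of the ZX protocol):

```python
def sequential_time(n_requests, rtt, service):
    # one request at a time: every request pays a full round trip
    return n_requests * (rtt + service)

def streamed_time(n_requests, rtt, service):
    # streaming: all requests are issued back to back, so only a single
    # round trip is exposed; the rest overlaps with server processing
    return rtt + n_requests * service
```

At a 100 ms RTT and 1 ms per-request service time, 100 sequential requests cost about 10.1 s, while streaming them costs about 0.2 s, which is why interactive use over a WAN needs streaming rather than lock-step request/reply.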
17.
Tomoya Enokido Ailixier Aikebaier Makoto Takizawa 《The Journal of supercomputing》2014,69(3):1087-1102
In energy-aware systems, it is critical to discuss how to reduce the total electric power consumption of information systems. In this paper, we consider communication-type applications where a server transmits a large volume of data to clients. A client first selects a server in a cluster of servers and issues a file transmission request to it. In our previous studies, the transmission power consumption (TPC) model and the extended TPC (ETPC) model of a server transmitting files to clients were proposed. In this paper, we propose the transmission power consumption laxity-based (TPCLB) algorithm for a cluster of servers, based on the TPC and ETPC models, so that the total power consumption of the cluster can be reduced. We evaluate the TPCLB algorithm in terms of total power consumption and elapsed time compared with the round-robin (RR) algorithm.
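A sketch of laxity-style server selection in the spirit of TPCLB: estimate the energy each server would spend finishing its queued transfers plus the new file, and pick the smallest estimate. The paper's TPC/ETPC models are more detailed; the power and rate fields here are assumed model inputs, not the authors' formulation.

```python
def select_server(servers, file_size):
    # servers: dicts with assumed fields power_w (watts while transmitting),
    # rate_bps (transmission rate), queued_bytes (bytes already queued).
    def estimated_energy(s):
        # energy = power * time to drain the queue plus the new file
        total_bytes = s["queued_bytes"] + file_size
        return s["power_w"] * (total_bytes * 8 / s["rate_bps"])
    return min(servers, key=estimated_energy)
```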
18.
A middleware is proposed to optimize the file fetch process in a transparent computing (TC) platform. A single TC server receives file requests for large-scale distributed operating systems, applications, or user data from multiple clients. In consideration of the limited size of the server's memory and the dependencies among files, this work proposes a middleware that provides a file fetch sequence satisfying two conditions: (1) each client, upon receiving any file, is able to load it directly without waiting for prerequisite files (i.e., "receive and load"); and (2) the server reduces the overall file fetch time. The paper first characterizes the problem of generating a valid file fetch sequence in the middleware, solving the concurrency control problem that arises when files are fetched by multiple clients. It then explores methods to determine the time cost of a file fetch sequence. Based on the established model, we propose a heuristic and greedy (HG) algorithm. According to the simulation results, the HG algorithm reduces overall file fetch time by roughly 50% in the best cases compared with traditional approaches.
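A "receive and load" sequence must respect file dependencies: a file may be sent only after everything it depends on. A greedy stand-in for the HG algorithm can be sketched as a topological order that always sends the smallest ready file next; the smallest-first tie-breaking rule is an assumption for illustration, not the paper's heuristic.

```python
import heapq

def fetch_sequence(sizes: dict, deps: dict) -> list:
    # sizes: file -> size; deps: file -> list of files it depends on.
    # Produce an order where every file appears after its prerequisites,
    # greedily preferring the smallest ready file (Kahn's algorithm + heap).
    indegree = {f: len(deps.get(f, [])) for f in sizes}
    children = {f: [] for f in sizes}
    for f, reqs in deps.items():
        for r in reqs:
            children[r].append(f)
    ready = [(sizes[f], f) for f in sizes if indegree[f] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, f = heapq.heappop(ready)
        order.append(f)
        for c in children[f]:
            indegree[c] -= 1
            if indegree[c] == 0:
                heapq.heappush(ready, (sizes[c], c))
    return order
```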
19.
This paper analyzes the structure of the PVFS parallel file system and derives the interface relationships among the client software, metadata server software, and data server software. It then studies a PVFS system built from PC clients, a PC metadata server, and low-cost data servers: the client and metadata server require no significant changes, while the data server software is modified to fit the new hardware platform, so that the same system can be built at lower cost, or higher performance achieved at the same hardware cost.
20.
We study the broadcast scheduling problem in which clients send requests to a server in order to receive files available on the server, and the server may be scheduled so that several requests are satisfied by one broadcast. When files are transmitted over computer networks, broadcasting them in fragments provides flexibility in broadcast scheduling that allows per-user response time to be optimized. The broadcast scheduling algorithm is then in charge of determining the number of segments of each file and their order of transmission in each round. In this paper, we obtain a closed-form formula that approximates the optimal number of segments for each file, aiming to minimize the total response time of requests. The formula is a function of several parameters, including those of the underlying network as well as those of the requests arriving at the server. Based on this approximation we propose a file broadcast scheduling algorithm whose total response time closely approaches the optimum. We use extensive simulation and numerical study to evaluate the proposed algorithm, which reveals the high accuracy of the analytical approximation. We also investigate the impact of the various headers that different network protocols add to each file segment. Our segmentation approach is examined for scenarios with file sizes ranging from 100 KB to 1 GB; over this range it stays on average within 13% of the optimum total response time, and its accuracy grows with file size. Moreover, the proposed segmentation leads to high goodput of the scheduling algorithm.
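The trade-off behind segmentation can be sketched with a crude model: more segments add per-segment header overhead, but they shorten the expected wait for a request arriving mid-broadcast, since the client only waits for the current segment to finish. The brute-force search below is a stand-in for the paper's closed-form approximation; the model and its parameters are illustrative assumptions.

```python
def response_time(file_size, segments, rate, header, mean_wait_rounds=0.5):
    # crude model: each segment carries a fixed protocol header, so more
    # segments mean more bytes on the wire, but a request arriving during
    # a broadcast only waits (on average) half a segment before joining
    seg_time = (file_size / segments + header) / rate
    wait = mean_wait_rounds * seg_time
    transmit = segments * seg_time
    return wait + transmit

def best_segments(file_size, rate, header, max_segments=4096):
    # brute-force the segment count minimizing modeled response time
    return min(range(1, max_segments + 1),
               key=lambda m: response_time(file_size, m, rate, header))
```

Under this model the optimum lands near sqrt(file_size / (2 * header)), so it grows with file size, consistent with the abstract's observation that accuracy improves for larger files.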