Similar Documents
18 similar records found (search time: 265 ms)
1.
To improve the robustness, concurrency, and user fairness of network application servers, the staged event-driven architecture (SEDA), together with its load-control strategy and multi-level queue scheduling, is applied to the server's network I/O processing, and a SEDA-based multi-stage general-purpose network I/O library is designed and implemented. A comparative test between CServer, a network server built on this I/O library, and the Apache server shows that CServer outperforms Apache in throughput, response time, and user fairness, indicating that the multi-stage general-purpose network I/O library can, to a certain extent, improve the robustness, concurrency, and user fairness of network servers.
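To make the staged design concrete, here is a minimal Python sketch of a SEDA-style stage: each stage owns a bounded event queue and a small thread pool, and a full queue triggers load shedding. The stage names and sizes are hypothetical; this illustrates the general SEDA pattern, not the CServer I/O library itself.

```python
import queue
import threading
import time

class Stage:
    """A SEDA-style stage: a bounded event queue, a small thread pool, and a handler.
    Illustrative only; names and sizes are hypothetical, not the CServer library."""
    def __init__(self, name, handler, queue_size=1024, workers=4):
        self.name = name
        self.handler = handler                      # callable(event) -> event for next stage, or None
        self.events = queue.Queue(maxsize=queue_size)
        self.next_stage = None
        for _ in range(workers):
            threading.Thread(target=self._loop, daemon=True).start()

    def enqueue(self, event):
        """Admission control: shed load instead of blocking when the queue is full."""
        try:
            self.events.put_nowait(event)
            return True
        except queue.Full:
            return False

    def _loop(self):
        while True:
            result = self.handler(self.events.get())
            if result is not None and self.next_stage is not None:
                self.next_stage.enqueue(result)

# A toy three-stage network I/O path: read -> parse -> respond.
respond = Stage("respond", lambda ev: print("responded to", ev))
parse = Stage("parse", lambda ev: ev)               # e.g. parse the request
read = Stage("read", lambda ev: ev)                 # e.g. read bytes from a socket
read.next_stage, parse.next_stage = parse, respond

read.enqueue({"conn": 1})                           # one request flows through all stages
time.sleep(0.1)                                     # let the daemon workers run
```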

2.
Research on the key factors affecting multimedia server performance   Cited: 7 (self-citations: 0, by others: 7)
When building a large-scale video service system, an architecture based on hierarchical multi-server clusters offers notable advantages in throughput, scalability, and cost, and is especially well suited to deployment on the Internet. However, to fully exploit and improve the performance of such a system, a series of problems centered on the main bottlenecks (such as server disk I/O bandwidth and network bandwidth) must be solved. This paper analyzes the main factors affecting the performance of multimedia video servers, including the server architecture, the data-transfer mode between server and clients, the distribution and placement of media data in the server's storage subsystem, the scheduling of disk access requests, cache management within a single server and cooperative caching across servers, admission-control policies, and stream-scheduling policies; these factors have a great impact on server performance and throughput. The paper also introduces performance-optimization techniques suitable for large-scale video service systems, such as broadcasting and batching stream-scheduling policies. Only by considering all of these factors together when building a video server system can the throughput of the server, and of the whole video service system, be truly improved while satisfying clients' QoS requirements.

3.
The common requirements that cloud computing places on the network include transparent services, dynamic resource adaptation, reliability guarantees, virtualization, and security. Virtualization, as a key technology of cloud data centers, aims to maximize limited physical resources while providing effective transport services. To address the degradation of I/O performance that high-speed network I/O virtualization suffers under I/O-intensive applications, this paper proposes an optimization method for network I/O performance: a network virtualization model with layered I/O scheduling that applies progressive I/O optimization strategies to VM applications on the cloud platform. Experimental results show that the proposed model improves network I/O performance and makes fuller use of hardware resources.

4.
I/O optimization of the video server storage subsystem   Cited: 1 (self-citations: 1, by others: 1)
The I/O performance of the storage subsystem determines the overall performance of a video server. This paper proposes a new real-time disk scheduling algorithm (LLF-Window) that can effectively serve video streams in various encoding formats, and reworks the traditional unbalanced SCSI bus scheduling mechanism. Experimental results show that the new disk scheduling algorithm and the modified SCSI bus scheduling mechanism effectively improve the I/O performance of the video server's storage subsystem and guarantee continuous playback of video streams.
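The abstract names the algorithm LLF-Window without detailing it; the sketch below only illustrates the generic least-laxity-first selection that the name suggests, applied within a service window of pending requests. The request fields and numbers are assumptions, not the paper's algorithm.

```python
import time
from dataclasses import dataclass

@dataclass
class DiskRequest:
    block: int            # starting block of the request (hypothetical field)
    deadline: float       # absolute time by which the data must be delivered
    service_time: float   # estimated time to service the request (seek + transfer)

def pick_next_request(window, now=None):
    """Least-laxity-first selection within a service window.

    laxity = deadline - now - estimated service time.  This is a generic
    LLF sketch, not the LLF-Window algorithm from the paper.
    """
    now = time.monotonic() if now is None else now
    return min(window, key=lambda r: r.deadline - now - r.service_time)

# Example: three queued requests; the one with the least slack goes first.
window = [DiskRequest(block=100, deadline=10.0, service_time=0.004),
          DiskRequest(block=900, deadline=10.5, service_time=0.006),
          DiskRequest(block=500, deadline=10.2, service_time=0.250)]
print(pick_next_request(window, now=9.9).block)   # -> 500 (smallest laxity)
```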

5.
While virtualization technology provides modern data centers with efficient server consolidation and flexible application deployment, it also imposes new requirements on the design of server I/O systems. The existing I/O architecture, in which I/O resources are tightly bound to individual servers, leads to rising costs, resource redundancy, and complex I/O cabling. To address these problems, this paper proposes a multi-root I/O resource pooling method based on the single-root I/O virtualization protocol (SR-IOV): a hardware-based multi-root inter-domain address and ID mapping mechanism allows multiple physical servers to share the same I/O device, effectively reducing the number of devices and cables required per server and further increasing server density. A virtual I/O device hot-plug technique and a multi-root sharing management mechanism are also proposed, enabling real-time dynamic allocation of virtual I/O resources among servers and improving resource utilization. The method is validated on a field-programmable gate array (FPGA) prototype system; the evaluation shows that it achieves multi-root I/O virtualization and sharing while allowing each root-node server to obtain I/O performance close to that of a locally attached device.

6.
Dividing the handling of an application's I/O requests into multiple stages introduces pipelining into network storage. At the same time, an application's I/O workload is split, according to its degree of parallelism, into multiple sub-workloads that pass through the pipeline stages, so synchronization overhead arises among sub-workloads in the same stage. Partitioning the network-storage I/O pipeline sensibly and studying the I/O pipelining mechanism therefore has practical significance for improving the overall performance of network storage systems. Experiments show that pipelined I/O scheduling overlaps the operations of the different I/O processing stages and improves the I/O performance of network storage systems.
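As a rough illustration of the I/O pipelining idea (not the paper's specific stage partitioning), the Python sketch below connects three hypothetical stages with queues so that different requests occupy different stages at the same time, which is where the overlap gain comes from.

```python
import queue
import threading

# Hypothetical three-stage network-storage I/O pipeline: stage 1 receives a
# request, stage 2 performs the storage access, stage 3 sends the reply.
# Queues between stages let stage k of one request overlap with stage k-1
# of the next request.  Stage names and work are assumptions.

def make_stage(work, inbox, outbox=None):
    def loop():
        while True:
            item = inbox.get()
            if item is None:            # shutdown marker
                if outbox is not None:
                    outbox.put(None)
                break
            result = work(item)
            if outbox is not None:
                outbox.put(result)
    t = threading.Thread(target=loop)
    t.start()
    return t

q_recv, q_io, q_send = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    make_stage(lambda r: r, q_recv, q_io),                # receive / decode
    make_stage(lambda r: ("data-for", r), q_io, q_send),  # storage access
    make_stage(lambda r: print("sent", r), q_send),       # transmit reply
]

for request_id in range(8):     # eight requests flow through the pipeline
    q_recv.put(request_id)
q_recv.put(None)                # propagate shutdown through the stages
for t in threads:
    t.join()
```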

7.
The I/O scheduling algorithm has a crucial impact on the performance of a disk array (RAID). Although many classical I/O scheduling algorithms perform well under particular workloads, hardly any single algorithm performs well under all workloads. This paper proposes an intelligent RAID control model that combines a C4.5 decision tree with the AdaBoost algorithm to classify workloads automatically, dynamically adjusts the I/O scheduling policy according to workload changes and performance feedback, and thereby achieves autonomous, application-aware scheduling. Simulation results show that the adaptive scheduling algorithm adapts well, outperforms existing I/O scheduling algorithms under a variety of workloads, and is especially suitable for I/O performance optimization in mixed multi-threaded workload environments.
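A minimal sketch of the classify-then-switch idea, using scikit-learn; note that scikit-learn's decision trees are CART rather than C4.5, and the features, workload classes, and scheduler table below are illustrative assumptions rather than the paper's model.

```python
# Sketch of workload-driven scheduler selection: a boosted decision tree
# classifies the current workload from simple I/O statistics, and the
# controller switches to the scheduler mapped to that class.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [mean request size (KB), read ratio, sequentiality, queue depth]
X_train = np.array([[4,   0.9, 0.1, 32],    # small random reads     -> "oltp"
                    [512, 0.8, 0.9, 4],     # large sequential reads -> "streaming"
                    [64,  0.3, 0.5, 16]])   # mixed read/write       -> "mixed"
y_train = np.array(["oltp", "streaming", "mixed"])

clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=50)
clf.fit(X_train, y_train)

# Hypothetical mapping from workload class to I/O scheduling policy.
SCHEDULER_FOR = {"oltp": "deadline", "streaming": "anticipatory", "mixed": "cfq"}

def choose_scheduler(io_stats):
    """Map the observed workload statistics to an I/O scheduling policy."""
    workload_class = clf.predict([io_stats])[0]
    return SCHEDULER_FOR[workload_class]

print(choose_scheduler([4, 0.9, 0.1, 32]))   # -> "deadline" (classified as "oltp")
```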

8.
Research on methods for improving network I/O performance in Linux   Cited: 2 (self-citations: 0, by others: 2)
李涛  房鼎益  陈晓江  冯健 《计算机工程》2008,34(23):142-143
Selecting and designing an efficient network I/O model is key to improving server performance. Based on an analysis of several network I/O models in the Linux system, this paper proposes three methods for improving network I/O performance and discusses how to implement them in Linux. Experimental results verify the effectiveness of the proposed scheme.

9.
Key factors affecting the I/O performance of VoD servers   Cited: 1 (self-citations: 0, by others: 1)
王澄  董玮文  杨宇航 《计算机工程》2002,28(7):140-142,147
Based on a discussion of the network video-on-demand system model and a general server architecture, this paper analyzes the key factors affecting the I/O capability of a video-on-demand server. The influence of storage-device throughput, PCI bus speed, SCSI channel speed, and network-card transfer speed on server I/O performance is analyzed comprehensively and in depth. Finally, a server-cluster approach is proposed to improve the I/O capability of a VoD system.

10.
This paper introduces a network storage system based on out-of-band virtualization, BW-VSDS for short, with the following characteristics: (1) a two-level out-of-band virtualization data-management model that fully exploits the I/O capability of individual storage nodes and unleashes the carrying capacity of the storage network; (2) a distributed data-storage management protocol that coordinates multiple storage nodes to implement advanced data-storage semantics efficiently; and (3) support for multiple data-transfer protocols to suit different application environments. The system has been applied in fields such as video surveillance, information processing, and enterprise office work.

11.
In networked video-surveillance systems, traditionally deployed standalone media servers suffer from heavy I/O load under concurrency, insufficient network bandwidth, and uneven load distribution. A cloud media server integrates overall scheduling, unified management, and performance optimization, maximizing resource utilization and performance gains. This paper studies and designs the overall architecture of a cloud media server and, based on this architecture, designs a load-evaluation method for the media servers in the cluster. Experimental results show that, compared with a video-surveillance system built on the traditional media-service architecture, the cloud media server achieves load balancing and fault-tolerant failover for services and can cope with bursts of highly concurrent access to video sources.

12.
Proxy-assisted periodic broadcast for video streaming with multiple servers   Cited: 2 (self-citations: 2, by others: 0)
Large scale video streaming over the Internet requires a large amount of resources such as server I/O bandwidth and network bandwidth. A number of video delivery techniques can be used to lower these requirements. Periodic broadcast by a central server combined with proxy caching offers a significant reduction of the aggregate network and server I/O bandwidth usage. However, the resources available to a single server are still limited. In this paper we propose a system with multiple geographically distributed servers. The problem of using multiple servers for periodic broadcast is quite different from the problem of object location for multiple web servers. Multiple servers offer an increased amount of resources and service availability, and may potentially allow a further reduction of network bandwidth usage. On the other hand, the benefit of periodic broadcast mostly comes from high-demand videos. With multiple servers holding a video, the demand for the video at each server is reduced. Therefore, it is a challenge to use multiple servers efficiently. We first analyze the dependence of the resource requirements on the number and locations of the servers. Based on the characteristics of the function describing this dependence, we formulate and solve the problem of video location and delivery in a way that minimizes resource usage. We explore a trade-off between network and I/O bandwidth requirements. We evaluate our proposed solutions through a number of tests.
David H. C. Du

13.
The system architecture of the Stony Brook Video Server (SBVS), which guarantees end-to-end real-time video playback in a client-server setting, is presented. SBVS employs a real-time network access protocol, RETHER, to use existing Ethernet hardware as the underlying communications medium. The video server tightly integrates the bandwidth-guarantee mechanisms for network transport and disk I/O. SBVS's stream-by-stream disk scheduling scheme optimizes the effective disk bandwidth without incurring significant scheduling overhead. To demonstrate the feasibility of the proposed architecture, we have implemented a prototype called SBVS-1, which can support five concurrent MPEG-1 video streams on an Intel 486DX2/EISA PC. To our knowledge, this system is the first video server that provides an end-to-end performance guarantee from the server's disks to each user's display over standard Ethernet. This paper describes the implementation details of integrating the network and I/O bandwidth-guarantee mechanisms, and the performance measurements that drive and/or validate our design decisions. © 1997 by John Wiley & Sons, Ltd.

14.
This paper presents a new scheme of I/O scheduling on storage servers of distributed/parallel file systems, aimed at yielding better I/O performance. To this end, we first analyze the read/write requests in the I/O queue of a storage server (we name them block I/Os) using our proposed horizontal-partition technique. All block requests are then divided into multiple groups on the basis of their offsets; that is, all requests related to the same chunk file are grouped together and then satisfied within the same time slot, between opening and closing the target chunk file on the storage server. As a result, the time needed to complete the block I/O requests can be significantly decreased, because fewer file operations are performed on the corresponding chunk files at the low-level file systems of the server machines. Furthermore, we introduce an algorithm that rates a priority for each group of block I/O requests, and the storage server then dispatches groups of I/Os in priority order. Consequently, applications with higher I/O priorities, e.g. those with fewer I/O operations and smaller amounts of involved data, can finish at an earlier time. We implement a prototype of this server-side scheduling in the PARTE file system to demonstrate the feasibility and applicability of the proposed scheme. Experimental results show that the newly proposed scheme can achieve better I/O bandwidth and less I/O time compared with the First Come First Served strategy, as well as with other server-side I/O scheduling approaches.
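A minimal sketch of the grouping idea: block I/Os are grouped by target chunk file and dispatched group by group, so each chunk file is opened and closed only once per time slot. The request fields, chunk size, and priority rule are assumptions for illustration; the actual PARTE rating algorithm is not given in the abstract.

```python
# Sketch: group queued block I/Os by target chunk file, rate each group,
# and dispatch whole groups so a chunk file is opened and closed only once.
from collections import defaultdict

CHUNK_SIZE = 64 * 1024 * 1024          # hypothetical 64 MB chunk files

def group_by_chunk(io_queue):
    groups = defaultdict(list)
    for req in io_queue:               # req = (offset, length, op); fields are assumptions
        groups[req[0] // CHUNK_SIZE].append(req)
    return groups

def priority(group):
    # Higher priority (smaller key) for groups with fewer requests and less data,
    # so "small" applications finish earlier, as the abstract describes.
    return (len(group), sum(length for _, length, _ in group))

def dispatch(io_queue):
    for chunk_id, group in sorted(group_by_chunk(io_queue).items(),
                                  key=lambda kv: priority(kv[1])):
        # open the chunk file once, serve all of its requests, then close it
        for offset, length, op in sorted(group):
            print(f"chunk {chunk_id}: {op} {length} bytes at offset {offset}")

dispatch([(10, 4096, "read"), (70_000_000, 8192, "write"), (20_000, 4096, "read")])
```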

15.
Loopback: exploiting collaborative caches for large-scale streaming   Cited: 1 (self-citations: 0, by others: 1)
In this paper, we propose a Loopback approach in a two-level streaming architecture to exploit collaborative client/proxy buffers for improving the quality and efficiency of large-scale streaming applications. At the upper level we use a content delivery network (CDN) to deliver video from a central server to proxy servers. At the lower level a proxy server delivers video with the help of collaborative client caches. In particular, a proxy server and its clients in a local domain cache different portions of a video and form delivery loops. In each loop, a single video stream originates at the proxy, passes through a number of clients, and finally is passed back to the proxy. As a result, with limited bandwidth and storage space contributed by collaborative clients, we are able to significantly reduce the required network bandwidth, I/O bandwidth, and cache space of a proxy. Furthermore, we develop a local repair scheme to address the client-failure issue, enhancing service quality and eliminating most of the repair load at the central server. For popular videos, our local repair scheme is able to handle most single-client failures without service disruption and retransmissions from the central server. Our analysis and simulations have shown the effectiveness of the proposed scheme.

16.
Due to the high bandwidth requirement and rate variability of compressed video, delivering video across wide area networks (WANs) is a challenging issue. Proxy servers have been used to reduce network congestion and improve client access time on the Internet by caching passing data. We investigate ways to store or stage partial video in proxy servers to reduce the network bandwidth requirement over the WAN. A client needs to access a portion of the video from a proxy server over a local area network (LAN) and the rest from a central server across a WAN. Therefore, client buffer requirements and video synchronization need to be considered. We study the tradeoffs between the client buffer, the storage requirement on the proxy server, and the bandwidth requirement over the WAN. Given a video delivery rate for the WAN, we propose several frame staging selection algorithms to determine the video frames to be stored in the proxy server. A scheme called the chunk algorithm, which partitions a video into different segments (chunks of frames) with alternating chunks stored in the proxy server, is shown to offer the best tradeoff. We also investigate an efficient way to utilize the client buffer when the combination of video streams from the WAN and LAN is considered.
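A toy sketch of the alternating-chunk idea: even-numbered chunks are staged at the proxy and served over the LAN, odd-numbered chunks come from the central server over the WAN. The chunk length and the even/odd rule are assumptions, not the paper's exact frame-staging selection.

```python
# Toy illustration of the "chunk" staging idea: partition a video into
# fixed-size chunks of frames and stage alternating chunks at the proxy.
CHUNK_FRAMES = 300                     # hypothetical: e.g. 10 s of 30 fps video

def source_of_frame(frame_no):
    """Return where a frame is served from under alternating-chunk staging."""
    chunk_no = frame_no // CHUNK_FRAMES
    return "proxy (LAN)" if chunk_no % 2 == 0 else "central server (WAN)"

def proxy_storage_fraction(total_frames):
    """Fraction of the video staged at the proxy (about 1/2 for alternating chunks)."""
    staged = sum(1 for f in range(total_frames)
                 if source_of_frame(f) == "proxy (LAN)")
    return staged / total_frames

print(source_of_frame(450))                     # -> "central server (WAN)" (chunk 1)
print(round(proxy_storage_fraction(9000), 2))   # -> 0.5
```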

17.
We present efficient schemes for scheduling the delivery of variable-bit-rate MPEG-compressed video with stringent quality-of-service (QoS) requirements. Video scheduling is being used to improve bandwidth allocation at a video server that uses statistical multiplexing to aggregate video streams prior to transporting them over a network. A video stream is modeled using a traffic envelope that provides a deterministic time-varying bound on the bit rate. Because of the periodicity in which frame types in an MPEG stream are typically generated, a simple traffic envelope can be constructed using only five parameters. Using the traffic-envelope model, we show that video sources can be statistically multiplexed with an effective bandwidth that is often less than the source peak rate. Bandwidth gain is achieved without sacrificing the stringency of the requested QoS. The effective bandwidth depends on the arrangement of the multiplexed streams, which is a measure of the lag between the GOP periods of various streams. For homogeneous streams, we give an optimal scheduling scheme for video sources at a video-on-demand server that results in the minimum effective bandwidth. For heterogeneous sources, a sub-optimal scheduling scheme is given, which achieves acceptable bandwidth gain. Numerical examples based on traces of MPEG-coded movies are used to demonstrate the effectiveness of our schemes.
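As a small numerical illustration of why the arrangement (GOP-phase lag) of multiplexed streams matters, the sketch below sums shifted periodic per-stream envelopes and reports the peak of the aggregate, which is the bandwidth that must be reserved under a deterministic bound. The envelope values and units are hypothetical, and this is not the paper's actual envelope model or scheduling scheme.

```python
# Toy illustration: each stream's deterministic envelope is a periodic
# per-frame bound over one GOP (an I/B/P-like pattern, purely hypothetical).
gop_envelope = [8, 2, 2, 4, 2, 2, 4, 2, 2]      # I, B, B, P, B, B, P, B, B
N = len(gop_envelope)

def aggregate_peak(phases):
    """Peak of the summed envelopes when stream i is shifted by phases[i] frames."""
    return max(sum(gop_envelope[(t - p) % N] for p in phases) for t in range(N))

print(aggregate_peak([0, 0, 0]))   # aligned I-frames: peak = 24
print(aggregate_peak([0, 3, 6]))   # staggered by 3 frames: peak = 16
```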

18.
Developments in multimedia technology over the past decade have caused video-on-demand services to emerge as a new paradigm in home entertainment. Because of the large volume of data involved and the stringent continuity and real-time constraints, these services pose challenges that are different from standard file-transfer operations in the network. The need for efficient usage of scarce resources such as network bandwidth and server capacity (in terms of I/O bandwidth) demands novel and easy-to-use schemes for scheduling continuous video streams. This paper presents an overview of the major scheduling policies that have emerged in the recent past. In particular, the paper provides a detailed discussion of policies based on the principles of broadcasting, batching, caching, and piggybacking or merging. Policies such as look-ahead scheduling schemes, which are designed exclusively to provide certain interactive VCR-like control operations, are also covered. A conceptual comparison between the various classes of scheduling policies is carried out to identify common threads and key concepts. The performance of these policies in terms of bandwidth-demand reduction, customer waiting-time reduction, provision of interactive control for the user, and fairness of service is given special emphasis. The paper concludes with a discussion of possible avenues of further research and development in this potentially interesting and challenging field.
