Similar Documents
 20 similar documents found.
1.
《Parallel Computing》1997,23(12):1727-1742
A server for an interactive distributed multimedia system may require thousands of gigabytes of storage space and high I/O bandwidth. In order to maximize system utilization, and thus minimize cost, the load must be balanced among the server's disks, interconnection network and scheduler. Many algorithms for maximizing retrieval capacity from the storage system have been proposed. This paper presents techniques for improving server capacity by assigning media requests to the nodes of a server so as to balance the load on the interconnection network and the scheduling nodes. Five policies for dynamic request assignment are developed. An important factor that affects data retrieval in a high-performance continuous media server is the degree of parallelism of data retrieval. The performance of the dynamic policies on an implementation of a server model developed earlier is presented for two values of the degree of parallelism.

2.
Padmavathi  Poorva   《Computer Networks》2006,50(18):3608-3621
In this paper, we address the server selection problem for streaming applications on the Internet. The architecture we consider is similar to content distribution networks consisting of geographically dispersed servers and user populations over an interconnected set of metropolitan areas. Server selection issues for Web-based applications in such an environment have been widely addressed; the selection is mostly based on proximity measured using packet delay. Such a greedy or heuristic approach to server selection does not address the capacity planning problem evident in multimedia applications. For such applications, admission control becomes an essential part of their design to maintain Quality of Service (QoS). Our objective in providing a solution to the server selection problem is threefold: first, to direct clients to the nearest server; second, to provide multiple sources to diffuse network load; third, to match server capacity to user demand so that optimal blocking performance can be expected. We accomplish all three objectives by using a special type of Linear Programming (LP) formulation called the Transportation Problem (TP). The objective function in the TP is to minimize the cost, measured by network distance, of serving a video request from user population x using server y. The optimal allocation between servers and user populations from the TP results in server clusters, with the aggregated capacity of each cluster designed to meet the demands of its designated user population. Within a server cluster, we propose streaming protocols for request handling that result in a balanced load. We implement threshold-based admission control in individual servers within a cluster to enforce each designated user population's fair share of the server resource. The blocking performance is used as a trigger to find new optimal allocations when blocking rates become unacceptable due to changes in user demand. We substantiate the analytical model with an extensive simulation for analyzing the performance of the proposed clustered architecture and the protocols. The simulation results show a significant difference in overall blocking performance between optimal and suboptimal allocations, as much as 15% at moderate to high workloads. We also show that the proposed cluster protocols result in lower packet loss and latencies by forcing path diversity from multiple sources for request delivery.
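
The core mechanism described above, matching server capacity to user demand by minimizing a network-distance cost, can be written down as a small transportation-problem instance. The sketch below is illustrative only: the demands, capacities and distance matrix are invented, and SciPy's general-purpose LP solver stands in for a dedicated TP method.

```python
# Illustrative transportation-problem (TP) instance for server selection.
# Toy data only: demands, capacities and the network-distance matrix are invented.
import numpy as np
from scipy.optimize import linprog

demand = np.array([40, 60, 30])        # requests/s from three user populations (assumed)
capacity = np.array([70, 50, 40])      # concurrent-stream capacity of three servers (assumed)
cost = np.array([[1, 4, 6],            # cost[i][j] = network distance, population i -> server j
                 [3, 1, 5],
                 [5, 4, 2]])

m, n = cost.shape
c = cost.ravel()                       # variable x[i, j] flattened row-major

# Each population's demand must be fully assigned to servers.
A_eq = np.zeros((m, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1

# No server may be assigned more than its capacity.
A_ub = np.zeros((n, m * n))
for j in range(n):
    A_ub[j, j::n] = 1

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand, bounds=(0, None))
allocation = res.x.reshape(m, n)
print(allocation.round(1))             # rows: populations, columns: servers
```

Nonzero entries of the resulting allocation matrix say which servers serve which populations; grouping servers by the populations they serve yields the clusters the abstract refers to.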

3.
Nowadays, large service centers provide computational capacity to many customers by sharing a pool of IT resources. The service providers and their customers negotiate utility-based Service Level Agreements (SLAs) to determine costs and penalties on the basis of the achieved performance level. The system is often based on a multi-tier architecture to serve requests, and autonomic techniques are implemented to manage varying workload conditions. The service provider would like to maximize the SLA revenues while minimizing its operating costs. The system we consider is based on a centralized network dispatcher which controls the allocation of applications to servers, the request volumes at the various servers and the scheduling policy at each server. The dispatcher can also decide to turn servers ON or OFF depending on the system load. This paper designs a resource allocation scheduler for such multi-tier autonomic environments so as to maximize the profits associated with multiple-class SLAs. The overall problem is NP-hard. We develop heuristic solutions by implementing a local-search algorithm. Experimental results are presented to demonstrate the benefits of our approach.
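
To make the local-search idea concrete, here is a minimal hill-climbing sketch over the servers' ON/OFF vector. The revenue and cost functions are invented placeholders (the paper's SLA model and search neighbourhood are richer), so this only illustrates the "flip one decision, keep it if profit improves" loop.

```python
# Toy hill-climbing local search over the servers' ON/OFF vector.
# The revenue/cost model is an invented placeholder, not the paper's SLA formulation.
SERVERS = 4
SERVER_COST = 10.0        # operating cost of one powered-on server (assumed)
DEMAND = 120.0            # total request volume to be served (assumed)
CAPACITY = 50.0           # per-server capacity (assumed)

def profit(on):
    active = [i for i in range(SERVERS) if on[i]]
    if not active:
        return float("-inf")
    per_server = DEMAND / len(active)                 # even split of the volume
    # SLA revenue grows with served load and is penalised sharply when a server is overloaded.
    revenue = len(active) * (2.0 * min(per_server, CAPACITY)
                             - 5.0 * max(0.0, per_server - CAPACITY))
    return revenue - SERVER_COST * len(active)

state = [True] * SERVERS                              # start with every server powered on
best = profit(state)
improved = True
while improved:                                       # flip one server at a time while profit improves
    improved = False
    for i in range(SERVERS):
        neighbour = state[:]
        neighbour[i] = not neighbour[i]
        p = profit(neighbour)
        if p > best:
            state, best, improved = neighbour, p, True
print(state, best)                                    # this toy instance settles on three active servers
```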

4.
Implementation Techniques for Scalable Parallel Web Server Clusters
With the continuing growth of Internet users and traffic, Web sites face higher performance requirements in order to shorten the response time to user requests. This paper introduces the working principles and implementation mechanisms of high-performance, scalable parallel Web server clusters, and discusses their application prospects and development trends.

5.
In conventional video-on-demand systems, video data are stored in a video server for delivery to multiple receivers over a communications network. The video server's hardware limits the maximum storage capacity as well as the maximum number of video sessions that can be delivered concurrently. Clearly, these limits will eventually be exceeded by the growing need for better video quality and a larger user population. This paper studies a parallel video server architecture that exploits server parallelism to achieve incremental scalability. First, unlike data partition and replication, the architecture employs data striping at the server level to achieve fine-grain load balancing across multiple servers. Second, a client-pull service model is employed to eliminate the need for inter-server synchronization. Third, an admission-scheduling algorithm is proposed to further control the instantaneous load at each server so that linear scalability can be achieved. This paper analyzes the performance of the architecture by deriving bounds for server service delay, client buffer requirement, prefetch delay, and scheduling delay. These performance metrics and design tradeoffs are further evaluated using numerical examples. Our results show that the proposed parallel video server architecture can be linearly scaled up to more concurrent users simply by adding more servers and redistributing the video data among them.
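
The two load-balancing ingredients named above, server-level striping and a client-pull retrieval model, can be illustrated in a few lines. This is a toy sketch: the stripe-unit size and server count are assumed, and the real architecture adds admission scheduling and client buffering on top of this mapping.

```python
# Toy illustration of server-level striping with client-pull retrieval.
# Stripe-unit size and server count are assumed; admission scheduling is omitted.
STRIPE_UNIT = 64 * 1024          # bytes per stripe unit (assumed)
NUM_SERVERS = 5

def server_for_unit(unit_index: int) -> int:
    """Round-robin striping: unit k of every video lives on server k mod N."""
    return unit_index % NUM_SERVERS

def client_pull_schedule(video_size_bytes: int):
    """Yield (unit, server) requests in playback order; the client drives retrieval,
    so no inter-server synchronization is needed."""
    units = -(-video_size_bytes // STRIPE_UNIT)       # ceiling division
    for u in range(units):
        yield u, server_for_unit(u)

for unit, server in client_pull_schedule(6 * STRIPE_UNIT):
    print(f"request stripe unit {unit} from server {server}")
```

Because consecutive stripe units land on different servers, every active stream spreads its retrieval load across all servers, which is what makes scaling by simply adding servers possible.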

6.
High-performance servers and high-speed networks will form the backbone of the infrastructure required for distributed multimedia information systems. A server for an interactive distributed multimedia system may require thousands of gigabytes of storage space and a high I/O bandwidth. In order to maximize the system utilization, and thus minimize the cost, it is essential that the load be balanced among each of the server's components, viz. the disks, the interconnection network and the scheduler. Many algorithms for maximizing retrieval capacity from the storage system have been proposed in the literature. This paper presents techniques for improving the server capacity by assigning media requests to the nodes of a server so as to balance the load on the interconnection network and the scheduling nodes. Five policies for request assignment are developed: round-robin (RR), minimum link allocation (MLA), minimum contention allocation (MCA), weighted minimum link allocation (WMLA) and weighted minimum contention allocation (WMCA). The performance of these policies on a server model developed by the authors (1995) is presented. We also consider the issue of file replication, and develop two schemes for storing the replicas: the parent group-based round-robin placement (PGBRRP) scheme, and the group-wide round-robin placement (GWRRP) scheme. The performance of the request assignment policies in the presence of file replication is presented.
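
As a rough illustration of the difference between a stateless policy and a contention-aware one, the sketch below contrasts round-robin with a minimum-contention style pick. It is not the paper's code: "contention" is reduced here to a per-node count of active streams, whereas the actual MCA/WMCA policies reason about the interconnection-network links of the server model.

```python
# Toy contrast between a stateless round-robin assignment and a contention-aware pick.
# "Contention" is reduced to a per-node count of active streams for illustration.
import itertools
from collections import Counter

class RoundRobin:
    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)

    def assign(self, request, load):
        return next(self._cycle)                 # ignores current load entirely

class MinContention:
    def assign(self, request, load):
        return min(load, key=load.get)           # node whose links carry the fewest streams

load = Counter({"node0": 3, "node1": 1, "node2": 2})
rr, mc = RoundRobin(list(load)), MinContention()

print("round-robin:", [rr.assign(f"movie-{k}", load) for k in range(3)])
for req in ("movie-17", "movie-4"):
    node = mc.assign(req, load)
    load[node] += 1                              # the new stream adds to that node's contention
    print("min-contention:", req, "->", node)
```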

7.
An Adaptive Service Composition Algorithm for Multi-Service Streaming Media Systems
Service composition can recombine heterogeneous services on the network into new services. For streaming media applications, this paper adopts a three-tier service composition architecture in which changes in users' on-demand viewing behavior are detected and trigger adaptive adjustment of the streaming service composition. With the goal of minimizing overall user on-demand latency, the approach maximizes the use of the servers' existing resources while guaranteeing user QoS, thereby expanding server capacity and reducing the rejection rate of user requests. Simulation results show that the performance of the streaming media service cluster improves substantially.

8.
Minimizing delivery cost in scalable streaming content distribution systems
Recent scalable multicast streaming protocols for on-demand delivery of media content offer the promise of greatly reduced server and network bandwidth. However, a key unresolved issue is how to design scalable content distribution systems that place replica servers closer to various client populations and route client requests and response streams so as to minimize the total server and network delivery cost. This issue is significantly more complex than the design of distribution systems for traditional Web files or unicast on-demand streaming, for two reasons. First, closest server and shortest path routing does not minimize network bandwidth usage; instead, the optimal routing of client requests and server multicasts is complex and interdependent. Second, the server bandwidth usage increases with the number of replicas. Nevertheless, this paper shows that the complex replica placement and routing optimization problem, in its essential form, can be expressed fairly simply, and can be solved for example client populations and realistic network topologies. The solutions show that the optimal scalable system can differ significantly from the optimal system for conventional delivery. Furthermore, simple canonical networks are analyzed to develop insights into effective heuristics for near-optimal placement and routing. The proposed new heuristics can be used for designing large and heterogeneous systems that are of practical interest. For a number of example networks, the best heuristics produce systems with total delivery cost that is within 16% of optimality.

9.
This paper presents the embedded realization and experimental evaluation of a media stream scheduler on network interface (NI) CoProcessor boards. When using media frames as scheduling units, the scheduler is able to operate in real time on streams traversing the CoProcessor, resulting in its ability to stream video to remote clients at real-time rates. This paper presents a detailed evaluation of the effects of placing application- or kernel-level functionality, like packet scheduling, on NIs rather than on the host machines to which they are attached. The main benefits of such placement are: 1) that traffic is eliminated from the host bus and memory subsystem, thereby allowing increased host CPU utilization for other tasks, and 2) that NI-based scheduling is immune to host-CPU loading, unlike host-based media schedulers that are easily affected even by transient load conditions. An outcome of this work is a proposed cluster architecture for building scalable media servers by distributing schedulers and media stream producers across the multiple NIs used by a single server and by clustering a number of such servers using commodity network hardware and software.

10.
The emergence of Mobile Edge Computing (MEC) enables mobile users to access services deployed on edge servers with low latency. However, MEC still faces various challenges, especially the service deployment problem. The number and resources of edge servers are usually limited, so only a limited number of services can be deployed; moreover, user mobility changes the popularity of different services in different regions. Under these conditions, deploying appropriate services for dynamic requests becomes a key issue. To address it, this work deploys suitable services by learning dynamic user requests so as to minimize interaction latency, formulates service deployment as a global optimization problem, and proposes a cluster-partition-based resource aggregation algorithm to produce an initial deployment of suitable services under computation, bandwidth and other resource constraints. In addition, considering the impact of dynamic user requests on service popularity and edge-server load, a dynamic adjustment algorithm is developed to update the deployed services so that Quality of Service (QoS) always meets user expectations. The performance of the proposed strategy is verified through a series of simulation experiments. The simulation results show that, compared with existing baseline algorithms, the proposed strategy reduces service interaction latency and achieves more stable load balancing.

11.
Cloud computing aims to provide dynamic leasing of server capabilities as scalable virtualized services to end users. However, data centers hosting cloud applications consume vast amounts of electrical energy, thereby contributing to high operational costs and carbon footprints. Green cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact are necessary. This study focuses on the Infrastructure as a Service model, where custom virtual machines (VMs) are launched on appropriate servers available in a data center. A complete data center resource management scheme is presented in this paper. The scheme can not only ensure user quality of service (through service level agreements) but can also achieve maximum energy saving and green computing goals. Considering that a data center usually contains tens of thousands of hosts, making an exact solution of the resource allocation problem impractical, the modified shuffled frog leaping algorithm and improved extremal optimization are employed in this study to solve the dynamic allocation problem of VMs. Experimental results demonstrate that the proposed resource management scheme exhibits excellent performance in green cloud computing.

12.
向洁  丁恩杰 《计算机应用》2013,33(12):3331-3334
With the rapid growth of data centers, their energy consumption has become an increasingly prominent problem, and energy-saving mechanisms for data centers have become a research hotspot; however, most such mechanisms do not adequately consider the heterogeneity of data centers, such as the differences between servers purchased at different times. To address this, the energy-efficiency ratio (performance/power), which characterizes a server's energy efficiency, is introduced as a parameter, and an energy-saving algorithm based on virtual machine scheduling, PVMAP, is proposed: when dynamically consolidating virtual machines, servers with high energy-efficiency ratios are filled preferentially, so as to minimize the number of VM migrations and the number of servers running concurrently. Simulation results show that the algorithm saves energy while guaranteeing Quality of Service (QoS), and has better stability and scalability than the other algorithms compared.
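
A minimal sketch of the placement idea described above, ranking servers by their performance/power ratio and packing VMs onto the most efficient machines first, appears below. All numbers and the host/VM model are invented, and the live-migration and QoS-checking parts of PVMAP are omitted.

```python
# Toy placement pass: rank servers by performance/power ratio and fill the most
# efficient ones first. Numbers are invented; PVMAP's migration and QoS checks are omitted.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    mips: float              # aggregate compute capacity
    max_power_w: float       # power draw at full load
    used: float = 0.0
    vms: list = field(default_factory=list)

    @property
    def efficiency(self) -> float:
        return self.mips / self.max_power_w      # performance-per-watt ranking key

def place_vms(hosts, vm_demands):
    """Greedy placement: biggest VMs first, onto the most power-efficient host that fits."""
    ranked = sorted(hosts, key=lambda h: h.efficiency, reverse=True)
    for vm_id, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for host in ranked:
            if host.used + demand <= host.mips:
                host.used += demand
                host.vms.append(vm_id)
                break
        else:
            raise RuntimeError(f"no capacity left for {vm_id}")
    return [h for h in ranked if h.vms]          # only these hosts need to stay powered on

hosts = [Host("old-1", mips=4000, max_power_w=400), Host("new-1", mips=6000, max_power_w=300)]
active = place_vms(hosts, {"vm-a": 2500, "vm-b": 1500, "vm-c": 3000})
print([(h.name, h.vms) for h in active])
```

Hosts that end up with no VMs can be switched off, which is where the energy saving comes from.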

13.
Zari  M. Saiedian  H. Naeem  M. 《Computer》2001,34(12):30-37
Slow performance costs e-commerce Web sites as much as $4.35 billion annually in lost revenue. Perceived latency, the amount of time between when a user issues a request and when the response arrives, is a critical issue. Research into improving performance falls into two categories: work on servers, and work on networks and protocols. On the server side, previous work has focused on techniques for improving server performance. Such studies show how Web servers behave under a range of loads, and they often suggest enhancements to application implementations and to the operating systems those servers run. On the network side, research has focused on improving network infrastructure performance for Internet applications. Studies focusing on network dynamics have resulted in several enhancements to HTTP, including data compression, persistent connections, and pipelining; these improvements are all part of HTTP 1.1. However, little work has been done on the common latency sources that cause the overall delays that frustrate end users. The future of performance improvement lies in developing additional techniques that are efficient, scalable, and stable and that enhance the end-user experience.

14.
Building on the Darwin streaming media server, the scheduling algorithm is first profiled and optimized with the VTune tool; the Darwin scheduling algorithm is then improved, and a real-time monitor thread that tracks server status is added. This thread sends each server's running state and parameters to the dispatcher, which compares the parameters reported by the servers and assigns client requests to the server with the lowest current load. Finally, a cluster test bed is built to verify that the work presented here is effective.
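
The dispatch loop described above (monitor threads report server status; the dispatcher sends each new request to the least-loaded server) can be sketched as follows. The load metric, its weights and the reporting interval are assumptions for illustration; the paper integrates this into the Darwin Streaming Server rather than a standalone Python process.

```python
# Toy dispatcher: each server's monitor thread reports status; new client requests go to
# the server with the smallest current load index. Metric weights are arbitrary here.
import random
import threading
import time

class Dispatcher:
    def __init__(self, server_names):
        self.load = {name: 0.0 for name in server_names}
        self.lock = threading.Lock()

    def report(self, server, cpu, bandwidth, sessions):
        """Called periodically by a server's monitor thread."""
        with self.lock:
            self.load[server] = 0.5 * cpu + 0.3 * bandwidth + 0.2 * sessions

    def dispatch(self):
        """Assign the next client request to the least-loaded server."""
        with self.lock:
            return min(self.load, key=self.load.get)

def monitor(dispatcher, name):
    for _ in range(3):                       # a real monitor thread would loop forever
        dispatcher.report(name, cpu=random.random(), bandwidth=random.random(),
                          sessions=random.random())
        time.sleep(0.05)

d = Dispatcher(["rtsp-1", "rtsp-2", "rtsp-3"])
threads = [threading.Thread(target=monitor, args=(d, name)) for name in d.load]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("next request goes to:", d.dispatch())
```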

15.
Multimedia computing is rapidly emerging as the next-generation standard for human-computer interaction. One class of multimedia applications that has been gaining much attention is the real-time display of continuous media data such as video and audio, commonly known as Video-On-Demand (VOD) service. Although advances in computer and network technologies have made VOD service feasible, providing guaranteed-quality, real-time video delivery still poses many technical challenges. One such challenge involves the transmission of continuous media traffic over high-speed networks. In this paper, we present an algorithm for determining the minimum buffer requirement for avoiding overflow or underflow at the client video display process, allowing the network scheduler at the VOD server to enforce constant-bit-rate delivery of variable-bit-rate encoded continuous media. This strategy results in reduced congestion and cell loss at the network switch, and in simplified admission control parameters. Initial results indicate that buffer requirements for typical video streams range from 3.7 to 14.6 megabytes, which is acceptable by today's multimedia PC standards. Further, we show that this approach increases the number of streams that can be multiplexed by a factor of 4.6 to 9.9 when compared to peak and 90%-of-peak bandwidth allocation strategies.
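
The quantity the abstract is after, the smallest client buffer that lets a constant-bit-rate schedule carry a variable-bit-rate stream without underflow or overflow, can be approximated numerically from a frame-size trace. The sketch below uses synthetic frame sizes and a simple "start playback after a fixed prefetch" rule, which is only an approximation of the paper's analysis.

```python
# Toy estimate of the client buffer needed so a constant-bit-rate (CBR) schedule can
# carry a variable-bit-rate (VBR) stream with neither underflow nor overflow.
# Synthetic frame sizes; the paper derives the exact minimum analytically.
import numpy as np

rng = np.random.default_rng(0)
frame_sizes = rng.integers(5_000, 60_000, size=3000)   # bits per frame (assumed trace)
fps = 25
per_slot = frame_sizes.sum() / len(frame_sizes)        # CBR delivery = average bits per frame slot

slots = np.arange(1, len(frame_sizes) + 1)
consumed = np.cumsum(frame_sizes)                      # bits played out by the end of slot t

# Smallest startup (prefetch) delay, in frame slots, that prevents underflow.
deficit = consumed - per_slot * slots
startup_slots = max(0, int(np.ceil(deficit.max() / per_slot)))

# Delivery starts `startup_slots` slots before playback; the buffer must absorb the
# largest lead of delivered bits over consumed bits.
delivered = per_slot * (slots + startup_slots)
occupancy = delivered - np.concatenate(([0], consumed[:-1]))
print(f"prefetch delay: {startup_slots / fps:.2f} s, buffer: {occupancy.max() / 8e6:.2f} MB")
```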

16.
Workflow management systems have been widely used in many business process management (BPM) applications, and many companies offer commercial software solutions for BPM. However, most of them adopt a simple client/server architecture with only a single centralized workflow-management server. As the number of incoming workflow requests increases, the single workflow-management server can become the performance bottleneck, leading to unacceptable response times. Developing parallel servers is a possible solution, but a parallel server architecture with a fixed number of servers cannot efficiently utilize computing resources under time-varying system workloads. This paper presents a distributed workflow-management server architecture which adopts dynamic resource provisioning mechanisms to deal with this potential performance bottleneck. We implemented a prototype system of the proposed architecture based on a commercial workflow management system, Agentflow. A series of experiments was conducted on the prototype system for performance evaluation. The experimental results indicate that the proposed architecture can deliver scalable performance and effectively maintain stable request response times under a wide range of incoming workflow request workloads.

17.
Recently, the multimedia cloud has been considered a new and effective serving mode in the e-health area that meets the requirement of scalable and economical multimedia service for e-health. It can provide a flexible stack of powerful Virtual Machine (VM) resources, such as CPU, memory, storage and network bandwidth, on demand to manage e-health media services and applications (e.g. medical image/video retrieval, health video transcoding, streaming, video rendering, sharing and delivery) at lower cost. However, one major issue is how to dynamically allocate VM resources in an efficient way based on e-health applications' QoS demands, while supporting energy and cost savings by optimizing the number of servers in use. In order to solve this problem, we propose a cost-effective and dynamic VM allocation model based on the Nash bargaining solution. Extensive simulations show that the proposed mechanism can reduce the overall cost of running servers while at the same time guaranteeing QoS demands and maximizing resource utilization across the various dimensions of server resources.
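
For intuition about what a Nash-bargaining allocation looks like, the toy below splits a single pooled capacity among tasks with linear utilities and fixed disagreement points by maximizing the product of their utility surpluses. The utilities, fallback points and single-resource setting are all invented simplifications of the paper's multi-dimensional model.

```python
# Toy Nash-bargaining split of one pooled capacity among tasks with linear utilities.
# Weights, disagreement points and the single-resource setting are invented simplifications.
import numpy as np
from scipy.optimize import minimize

capacity = 100.0                       # total VM capacity units to share (assumed)
weight = np.array([1.0, 2.0, 1.5])     # utility slope of each task: u_i(x) = weight_i * x
d = np.array([5.0, 8.0, 6.0])          # disagreement utilities (minimum acceptable QoS)

def neg_log_nash_product(x):
    # Maximizing the product of surpluses == maximizing the sum of their logarithms.
    return -np.sum(np.log(weight * x - d))

bounds = [(di / wi + 1e-6, None) for di, wi in zip(d, weight)]   # keep every surplus positive
cons = ({"type": "ineq", "fun": lambda x: capacity - x.sum()},)
res = minimize(neg_log_nash_product, np.full(3, capacity / 3),
               method="SLSQP", bounds=bounds, constraints=cons)

# Closed form for this special case: fallback share plus an equal split of the surplus.
closed_form = d / weight + (capacity - np.sum(d / weight)) / len(d)
print(res.x.round(2), closed_form.round(2))
```

With linear utilities and a single capacity constraint the bargaining solution has a closed form (each task gets its fallback plus an equal share of the remaining capacity), which the last line prints for comparison.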

18.
Benchmark Experiments and a Performance Model for Streaming Media Server Capacity
Streaming media service providers need to know how to test a server's service capacity and how to estimate the system's real-time load. This paper proposes a set of benchmark experiments that measure a server's capacity to provide video-on-demand service of different qualities and delivery modes when the content is variable-bit-rate video, yielding a load-dependent server performance model and a method for estimating real-time load. Validation experiments on a real system show that the performance model accurately characterizes the server's real-time load.

19.
A distributed multiserver Web site can provide the scalability necessary to keep up with growing client demand at popular sites. Load balancing of these distributed Web-server systems, consisting of multiple, homogeneous Web servers for document retrieval and a Domain Name Server (DNS) for address resolution, opens interesting new problems. In this paper, we investigate the effects of using a more active DNS which, as an atypical centralized scheduler, applies some scheduling strategy in routing the requests to the most suitable Web server. Unlike traditional parallel/distributed systems in which a centralized scheduler has full control of the system, the DNS controls only a very small fraction of the requests reaching the multiserver Web site. This peculiarity, especially in the presence of highly skewed load, makes it very difficult to achieve acceptable load balancing and avoid overloading some Web servers. This paper adapts traditional scheduling algorithms to the DNS, proposes new policies, and examines their impact under different scenarios. Extensive simulation results show the advantage of strategies that make scheduling decisions on the basis of the domain that originates the client requests and limited server state information (e.g., whether a server is overloaded or not). An initially unexpected result is that using detailed server information, especially based on history, does not seem useful in predicting the future load and can often lead to degraded performance.
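
The winning ingredients named above, per-domain routing decisions plus a binary overloaded flag per server, can be sketched as a tiny DNS-level dispatcher. The class names, TTL handling and per-domain weight estimate are assumptions for illustration, not the policies evaluated in the paper.

```python
# Toy DNS-level dispatcher: routing decisions are made per originating domain, using only
# an estimated per-domain request weight and a binary overloaded flag per server.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    assigned_weight: float = 0.0     # sum of weights of domains currently mapped here
    overloaded: bool = False         # the only per-server state the DNS sees

class DnsScheduler:
    def __init__(self, servers, ttl=300):
        self.servers = servers
        self.ttl = ttl               # cached name->IP mappings bypass the DNS for ttl seconds
        self.domain_map = {}         # domain -> server chosen at the last resolution

    def resolve(self, domain, domain_weight):
        """Pick a server for all clients of `domain` until the cached mapping expires."""
        candidates = [s for s in self.servers if not s.overloaded] or self.servers
        target = min(candidates, key=lambda s: s.assigned_weight)
        target.assigned_weight += domain_weight
        self.domain_map[domain] = target
        return target.name

dns = DnsScheduler([Server("web1"), Server("web2"), Server("web3")])
print(dns.resolve("isp-a.example", domain_weight=120))   # heavy domain -> least-loaded server
print(dns.resolve("isp-b.example", domain_weight=15))
```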

20.
Disk scheduling in video editing systems
Modern video servers support both video-on-demand and nonlinear editing applications. Video-on-demand servers enable the user to view video clips or movies from a video database, while nonlinear editing systems enable the user to manipulate the content of the video database. Applications such as video and news editing systems require that the underlying storage server be able to concurrently record live broadcast information, modify prerecorded data, and broadcast an authored presentation. A multimedia storage server that efficiently supports such a diverse group of activities constitutes the focus of this study. A novel real-time disk scheduling algorithm is presented that treats both read and write requests in a homogeneous manner in order to ensure that their deadlines are met. Due to the real-time demands of movie viewing, read requests have to be fulfilled within certain deadlines; otherwise, they are considered lost. Since the data to be written to disk is stored in main memory buffers, write requests can be postponed until critical read requests are processed. However, write requests still have to be processed within reasonable delays and without the possibility of indefinite postponement. This is due to the physical constraint of the limited size of the main memory write buffers. The new algorithm schedules both read and write requests appropriately, to minimize the number of disk reads that miss their presentation deadlines, and to avoid indefinite postponement and large buffer sizes in the case of disk writes. Simulation results demonstrate that the proposed algorithm offers few violations of read deadlines, reduces waiting time for lower-priority disk requests, and improves the throughput of the storage server by enhancing the utilization of available disk bandwidth.
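
A toy version of the read/write treatment described above is sketched below: reads carry playback deadlines and are dropped once missed, while writes wait in a bounded main-memory buffer and are flushed before the buffer fills or whenever no read is urgent. The service-time constant, buffer size and urgency window are invented parameters, not the paper's.

```python
# Toy deadline-aware disk scheduler: reads carry playback deadlines and are dropped once
# missed; writes wait in a bounded main-memory buffer and are flushed before it fills or
# whenever no read is urgent. Service time, buffer size and urgency window are invented.
import heapq

SERVICE_TIME_MS = 5          # assumed time to service one request
WRITE_BUFFER_SLOTS = 8       # assumed number of main-memory write buffers

class DiskScheduler:
    def __init__(self):
        self.reads = []          # min-heap of (deadline_ms, block)
        self.writes = []         # FIFO of blocks waiting to be flushed

    def submit_read(self, deadline_ms, block):
        heapq.heappush(self.reads, (deadline_ms, block))

    def submit_write(self, block):
        self.writes.append(block)

    def next_request(self, now_ms):
        # Reads whose deadline has already passed are lost frames: drop them.
        while self.reads and self.reads[0][0] < now_ms:
            heapq.heappop(self.reads)
        buffer_pressure = len(self.writes) >= WRITE_BUFFER_SLOTS - 1
        read_urgent = bool(self.reads) and self.reads[0][0] <= now_ms + 2 * SERVICE_TIME_MS
        # Serve a write when the buffer is nearly full or no read is close to its deadline.
        if self.writes and (buffer_pressure or not read_urgent):
            return ("write", self.writes.pop(0))
        if self.reads:
            return ("read", heapq.heappop(self.reads)[1])
        return None

sched = DiskScheduler()
sched.submit_read(deadline_ms=40, block="frame-301")
sched.submit_write("edited-chunk-17")
print(sched.next_request(now_ms=0))      # no urgent read yet, so the write goes first
```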
