Similar literature
20 similar documents were found (search time: 234 ms).
1.
Video can naturally be encoded in a multi-resolution format. A multi-resolution or scalable video stream is a video sequence encoded such that subsets of the full resolution video bit stream can be decoded to recreate lower resolution video streams. Employing scalable video enables a video server to provide multiple resolution services for a variety of clients with different decoding capabilities and network bandwidths connected to the server. The inherent advantages of the multi-resolution video server include: heterogeneous client support, storage efficiency, adaptable service, and interactive operations support. For designing a video server, several issues should be dealt with under a unified framework including data placement/retrieval, buffer management, and admission control schemes for deterministic service guarantee. In this paper, we present a general framework for designing a large-scale multi-resolution video server. First, we propose a general multi-resolution video stream model which can be implemented by various scalable compression techniques. Second, given the proposed stream model, we devise a hybrid data placement scheme to store scalable video data across disks in the server. The scheme exploits both concurrency and parallelism offered by striping data across the disks and achieves disk load balancing during any resolution video service. Next, the retrieval of multi-resolution video is described. The deterministic access property of the placement scheme permits the retrieval scheduling to be performed on each disk independently and to support interactive operations (e.g. pause, resume, slow playback, fast-forward and rewind) simply by reconstructing the input parameters to the scheduler. We also present an efficient admission control algorithm which precisely estimates the actual disk workload for the given resolution services and hence permits the buffer requirement to be much smaller. The proposed schemes are verified through detailed simulation and implementation.
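The abstract describes striping the layers of a scalable stream across disks so that any chosen resolution still spreads its load evenly, but it does not give the placement function. The sketch below is only a minimal illustration of that idea, assuming a simple round-robin assignment of per-layer blocks with a per-layer offset; the names `place_blocks` and `per_disk_load` are illustrative, not from the paper.

```python
# Illustrative sketch (not the paper's exact placement algorithm): stripe the
# blocks of each resolution layer of a scalable video across disks round-robin,
# offsetting each layer so that any subset of layers still spreads over all disks.

def place_blocks(layers, num_disks):
    """layers[l][t] is the block of layer l at time slot t.
    Returns a {(layer, slot): disk} mapping."""
    placement = {}
    for l, blocks in enumerate(layers):
        for t in range(len(blocks)):
            placement[(l, t)] = (t + l) % num_disks   # layer offset avoids hot disks
    return placement

def per_disk_load(placement, active_layers, slots, num_disks):
    """Blocks read from each disk when serving only the given layers for `slots` slots."""
    load = [0] * num_disks
    for (l, t), disk in placement.items():
        if l in active_layers and t < slots:
            load[disk] += 1
    return load

if __name__ == "__main__":
    layers = [list(range(12)) for _ in range(3)]          # 3 layers, 12 blocks each
    placement = place_blocks(layers, num_disks=4)
    print(per_disk_load(placement, {0}, 8, 4))            # base-layer service: [2, 2, 2, 2]
    print(per_disk_load(placement, {0, 1, 2}, 8, 4))      # full resolution: [6, 6, 6, 6]
```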

2.
3.
Video-on-demand (VOD) service requires balanced use of system resources, such as disk bandwidth and buffer, to accommodate more clients. The data retrieval size and data rates of video streams directly affect the utilization of these resources. Given the data rates which vary widely in multi-resolution video servers, we need to determine the appropriate data retrieval size to balance the buffer with the disk bandwidth. Otherwise, the server may be unable to admit new clients even though one of the resources is available for use. To address this problem, we propose the following new schemes that work together: (1) A replication scheme called Splitting Striping units by Replication (SSR). To increase the number of admitted clients, SSR defines two sizes of striping unit, which allow data to be stored on the primary and backup copies in different ways. (2) A retrieval scheduling method which combines the merits of existing SCAN and grouped sweeping scheme (GSS) algorithms to balance the buffer and disk bandwidth usage. (3) Admission control algorithms which decide whether to read data from the primary or the backup copy. The effectiveness of the proposed schemes is demonstrated through simulations. Results show that our schemes are able to cope with various workloads efficiently and thus enable the server to admit a much larger number of clients.
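The retrieval method combines SCAN's seek efficiency with GSS's grouping, which trades buffer space against disk efficiency by serving streams in g groups per round and sweeping only within a group. A minimal sketch of that round structure, with the group count and request fields chosen for illustration rather than taken from the paper:

```python
# Minimal GSS-style round: streams are split into g groups; within each group
# requests are served in SCAN (track-sorted) order. The group count g trades
# buffer space against seek overhead (g=1 is pure SCAN, g=#streams is round-robin).

def gss_round(requests, g):
    """requests: list of (stream_id, track). Returns the service order for one round
    as a list of per-group, SCAN-ordered batches."""
    groups = [[] for _ in range(g)]
    for i, req in enumerate(requests):
        groups[i % g].append(req)              # fixed assignment of streams to groups
    return [sorted(group, key=lambda r: r[1]) for group in groups]

if __name__ == "__main__":
    reqs = [("s1", 500), ("s2", 120), ("s3", 430), ("s4", 80), ("s5", 300), ("s6", 610)]
    for batch in gss_round(reqs, g=2):
        print(batch)                           # each batch is swept in track order
```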

4.
Dynamic batching policies for an on-demand video server
In a video-on-demand environment, continuous delivery of video streams to the clients is guaranteed by sufficient reserved network and server resources. This leads to a hard limit on the number of streams that a video server can deliver. Multiple client requests for the same video can be served with a single disk I/O stream by sending (multicasting) the same data blocks to multiple clients (with the multicast facility, if present in the system). This is achieved by batching (grouping) requests for the same video that arrive within a short time. We explore the role of customer waiting time and reneging behavior in selecting the video to be multicast. We show that a first come, first served (FCFS) policy that schedules the video with the longest outstanding request can perform better than the maximum queue length (MQL) policy that chooses the video with the maximum number of outstanding requests. Additionally, multicasting is better exploited by scheduling playback of the most popular videos at predetermined, regular intervals (hence, termed FCFS-). If user reneging can be reduced by guaranteeing that a maximum waiting time will not be exceeded, then performance of FCFS- is further improved by selecting the regular playback intervals as this maximum waiting time. For an empirical workload, we demonstrate a substantial reduction (of the order of 60%) in the required server capacity by batching.
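A minimal sketch contrasting the two batch-selection rules described above, FCFS (serve the video with the longest outstanding request) versus MQL (serve the video with the most outstanding requests); the queue representation and function names are my own, not the paper's.

```python
# Minimal sketch of the two batching policies contrasted in the abstract.
# Each pending request is (video_id, arrival_time); when a stream becomes free,
# the scheduler picks one video and serves its whole batch with a single stream.

from collections import defaultdict

def pick_fcfs(pending):
    """FCFS: serve the video whose oldest outstanding request arrived first."""
    oldest = {}
    for video, arrival in pending:
        if video not in oldest or arrival < oldest[video]:
            oldest[video] = arrival
    return min(oldest, key=oldest.get) if oldest else None

def pick_mql(pending):
    """MQL: serve the video with the largest number of outstanding requests."""
    counts = defaultdict(int)
    for video, _ in pending:
        counts[video] += 1
    return max(counts, key=counts.get) if counts else None

if __name__ == "__main__":
    pending = [("A", 0.0), ("B", 1.0), ("B", 2.0), ("B", 3.0)]
    print(pick_fcfs(pending))  # "A": its request has waited the longest
    print(pick_mql(pending))   # "B": it has the largest batch
```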

5.
In this paper, we propose a new multicast scheme that is based on the client-initiated-with-prefetching (CIWP) and peer-to-peer (P2P) transfer of a partial multimedia stream. In the CIWP scheme, when a new client joins an ongoing multicast channel, the server has to create an extra unicast channel to retransmit the partial stream that has already been transmitted. However, the unicast channel consumes some of the I/O bandwidth of the server, as well as some of the network resources between the server and the client's Internet Service Provider (ISP). To solve this problem, we propose the use of the P2P transfer algorithm to deliver the partial stream from a client that has already joined the ongoing multicast session to the newcomer. This P2P transfer between clients is limited to clients belonging to the same ISP. To further improve the performance, a threshold is used to control the P2P transfer. We performed analytical studies to show that the proposed multicast scheme can reduce the consumption of the network resources of the server, by utilizing the client's disk space. We also performed various simulation studies to demonstrate the performance improvement in terms of the use of the server's bandwidth and the waiting time for the clients’ requests.
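A rough sketch of the decision the abstract describes: when a newcomer joins an ongoing multicast, the missed prefix is fetched peer-to-peer from a same-ISP client if the prefix is short enough (the threshold), otherwise the server falls back to a unicast patch. The exact policy and names here are assumptions; the paper gives the full analysis.

```python
# Illustrative decision logic for threshold-controlled P2P patching (names and
# the precise policy are assumptions, not the paper's specification).

def patch_source(missed_seconds, threshold, same_isp_peers):
    """Decide who delivers the missed prefix to a newcomer of an ongoing multicast.

    missed_seconds : playback time already multicast before the newcomer joined
    threshold      : maximum prefix length allowed to be patched peer-to-peer
    same_isp_peers : peers in the newcomer's ISP that cached the prefix
    """
    if missed_seconds == 0:
        return "none"                       # joined at the start, nothing to patch
    if missed_seconds <= threshold and same_isp_peers:
        return f"p2p:{same_isp_peers[0]}"   # a peer in the same ISP sends the prefix
    return "server-unicast"                 # fall back to a unicast patch channel

if __name__ == "__main__":
    print(patch_source(30, threshold=60, same_isp_peers=["client-17"]))  # p2p:client-17
    print(patch_source(90, threshold=60, same_isp_peers=["client-17"]))  # server-unicast
```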

6.
7.
As typical applications in the field of cloud computing, cloud storage services are popular in the development of smart cities for their low costs and huge storage capacity. Proofs-of-ownership (PoW) is an important cryptographic primitive in cloud storage to ensure that a client holds the whole file rather than part of it in secure client side data deduplication. Previous PoW schemes work well when the file is in plaintext. However, the privacy of the clients’ data may be vulnerable to honest-but-curious attacks. To deal with this issue, clients tend to encrypt files before outsourcing them to the cloud, which makes the existing PoW schemes no longer applicable. In this paper, we first propose a secure zero-knowledge based client side deduplication scheme over encrypted files. We prove that the proposed scheme is sound, complete and zero-knowledge. The scheme can achieve a high detection probability of the clients’ misbehavior. We then introduce a proxy re-encryption based key distribution scheme. This scheme ensures that the server knows nothing about the encryption key even though it acts as a proxy to help distribute the file encryption key. It also enables clients who have gained ownership of a file to share the file, using the generated encryption key, without establishing secure channels among them. It is proved that the clients’ private key cannot be recovered by the server or by client collusion attacks during the key distribution phase. Our performance evaluation shows that the proposed scheme is much more efficient than the existing client side deduplication schemes.
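PoW lets a server that already stores a file check that an uploading client really holds the entire file before granting deduplicated ownership. The paper's construction is zero-knowledge and works over encrypted files; the sketch below shows only the generic challenge-response shape of a PoW exchange (random block positions hashed with a nonce) and is explicitly not the proposed scheme.

```python
# Generic proof-of-ownership challenge-response shape (NOT the paper's
# zero-knowledge construction): the server challenges random block positions
# and a nonce; the client answers with a hash over those blocks.

import hashlib
import os
import secrets

BLOCK = 4096

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def make_challenge(num_blocks, k=8):
    positions = sorted(secrets.randbelow(num_blocks) for _ in range(k))
    return positions, os.urandom(16)

def respond(data, positions, nonce):
    h = hashlib.sha256(nonce)
    bs = blocks(data)
    for p in positions:
        h.update(bs[p])
    return h.hexdigest()

def verify(server_copy, positions, nonce, client_answer):
    return secrets.compare_digest(respond(server_copy, positions, nonce), client_answer)

if __name__ == "__main__":
    file_data = os.urandom(10 * BLOCK)
    pos, nonce = make_challenge(len(blocks(file_data)))
    answer = respond(file_data, pos, nonce)            # a client holding the whole file
    print(verify(file_data, pos, nonce, answer))       # True
    partial = file_data[: 3 * BLOCK]                   # holding only a part (almost surely) fails
    try:
        print(verify(file_data, pos, nonce, respond(partial, pos, nonce)))
    except IndexError:
        print("client cannot even compute the response")
```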

8.
Loopback: exploiting collaborative caches for large-scale streaming
In this paper, we propose a Loopback approach in a two-level streaming architecture to exploit collaborative client/proxy buffers for improving the quality and efficiency of large-scale streaming applications. At the upper level we use a content delivery network (CDN) to deliver video from a central server to proxy servers. At the lower level a proxy server delivers video with the help of collaborative client caches. In particular, a proxy server and its clients in a local domain cache different portions of a video and form delivery loops. In each loop, a single video stream originates at the proxy, passes through a number of clients, and is finally passed back to the proxy. As a result, with limited bandwidth and storage space contributed by collaborative clients, we are able to significantly reduce the required network bandwidth, I/O bandwidth, and cache space of a proxy. Furthermore, we develop a local repair scheme to address client failures, enhancing service quality and eliminating most of the repair load at the central server. For popular videos, our local repair scheme is able to handle most single-client failures without service disruption and retransmissions from the central server. Our analysis and simulations have shown the effectiveness of the proposed scheme.
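The abstract describes a delivery loop in which the proxy and collaborating clients each cache a different portion of the video and a single stream travels from the proxy through each client and back to the proxy. A minimal sketch of how such a loop might be composed; the segment split and the greedy ordering are my own simplification, not the paper's construction.

```python
# Simplified composition of one Loopback delivery loop: the proxy and each
# collaborating client cache consecutive portions of the video, and a single
# stream is forwarded along the loop and back to the proxy.

def build_loop(video_len, proxy_cache, clients):
    """clients: list of (client_id, cache_seconds). Returns the forwarding order
    with the portion each node serves, or None if the caches cannot cover the video."""
    loop = [("proxy", 0, min(proxy_cache, video_len))]
    start = min(proxy_cache, video_len)
    for cid, cache in clients:
        if start >= video_len:
            break
        end = min(start + cache, video_len)
        loop.append((cid, start, end))
        start = end
    return loop if start >= video_len else None

if __name__ == "__main__":
    # 120 s video, proxy caches 30 s, three clients contribute 40/30/30 s
    loop = build_loop(120, 30, [("c1", 40), ("c2", 30), ("c3", 30)])
    for node, s, e in loop:
        print(f"{node} serves [{s}, {e}) s")   # the stream passes through each node in turn
```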

9.
Lin, Law Sie, Yong Khai 《Computer Communications》2006,29(18):3780-3788
In a Video-on-Demand (VoD) system, in order to guarantee smooth playback of a video stream, sufficient resources (such as disk I/O bandwidth and network bandwidth) have to be reserved in advance. Thus, given limited resources, the number of simultaneous streams that can be supported by a video server is restricted. Due to its mechanical nature, the I/O subsystem is generally the performance bottleneck of a VoD system, and there have been a number of caching algorithms to overcome the disk bandwidth limitation. In this paper, we propose a novel caching strategy, referred to as the client-assisted interval caching (CIC) scheme, to balance the requirements of I/O bandwidth and cache capacity in a cost-effective way. The CIC scheme uses the cache memory available in clients to serve the first few blocks of streams so as to dramatically reduce the demand on the I/O bandwidth of the server. Our objective is to maximize the number of requests that can be supported by the system and minimize the overall system cost. Simulations are carried out to study the performance of our proposed strategy under various conditions. The experimental results show the superiority of the CIC scheme over the traditional Interval Caching (IC) scheme with respect to request acceptance ratio and average service cost per stream.
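Classic interval caching stores the blocks between two consecutive streams of the same video so the trailing stream reads from cache rather than disk; CIC additionally lets clients hold some of those blocks. A minimal sketch of the interval-caching admission decision under that reading; the `client_cache_free` parameter and the exact test are my assumptions, not the paper's formulation.

```python
# Minimal interval-caching sketch: a new request for a video that is already
# playing can be served from cache if the interval to the preceding stream fits.

def try_interval_cache(new_arrival, ongoing_arrivals, block_rate, server_cache_free,
                       client_cache_free=0):
    """ongoing_arrivals: arrival times of streams of the same video, ascending.
    Returns the number of blocks to cache, or None if the new stream needs disk bandwidth."""
    if not ongoing_arrivals:
        return None
    gap = new_arrival - ongoing_arrivals[-1]            # seconds behind the closest stream
    interval_blocks = int(gap * block_rate)
    if interval_blocks <= server_cache_free + client_cache_free:
        return interval_blocks                          # trailing stream served from cache
    return None

if __name__ == "__main__":
    # streams of the same movie started at t=0 and t=40; a new request arrives at t=55
    print(try_interval_cache(55, [0, 40], block_rate=1.5, server_cache_free=10,
                             client_cache_free=15))     # 22 blocks -> cacheable with client help
    print(try_interval_cache(55, [0, 40], block_rate=1.5, server_cache_free=10))  # None
```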

10.
We present a new adaptive and energy-efficient broadcast dissemination model that supports flexible responses to client requests. In current broadcast dissemination models, clients must specify precisely what documents they require, and servers disseminate exactly those documents. This approach can be impractical, since in practice, clients may know the characteristics of the documents, but not the document names or IDs. In our model, clients specify the required document using attributes, and servers broadcast documents that match client requests at a prespecified level of similarity. A single document may satisfy several clients, so the server broadcasts a minimal set of documents that achieves a desired level of satisfaction in the client population. We introduce a mechanism for the server to obtain randomized feedback from clients to adapt its broadcast program to client needs. Finally, the server integrates a selective tune-in scheme based on approximate index matching to allow clients to conserve energy. Our simulation results show that our model captures client interest patterns efficiently and accurately and scales very well with the number of clients, while reducing overall client average waiting times. The selective tune-in scheme can considerably reduce the consumption of client energy with moderate waiting time overhead.
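Choosing a minimal set of documents such that every client request is matched at a prespecified similarity level is a set-cover-style selection. A small greedy sketch under that reading; the Jaccard similarity measure and the greedy heuristic are illustrative assumptions, and the paper's own selection and similarity metric may differ.

```python
# Greedy selection of a small document set so that every client request is
# matched at a required similarity level (a set-cover-style heuristic).

def jaccard(attrs_a, attrs_b):
    a, b = set(attrs_a), set(attrs_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def choose_broadcast_set(documents, requests, min_sim):
    """documents: {doc_id: attributes}; requests: {client_id: attributes}."""
    unsatisfied = set(requests)
    chosen = []
    while unsatisfied:
        best_doc, best_covered = None, set()
        for doc_id, attrs in documents.items():
            covered = {c for c in unsatisfied if jaccard(attrs, requests[c]) >= min_sim}
            if len(covered) > len(best_covered):
                best_doc, best_covered = doc_id, covered
        if best_doc is None:
            break                      # remaining clients cannot be satisfied at this level
        chosen.append(best_doc)
        unsatisfied -= best_covered
    return chosen, unsatisfied

if __name__ == "__main__":
    docs = {"d1": ["news", "sports", "video"], "d2": ["weather", "maps"]}
    reqs = {"c1": ["news", "video"], "c2": ["weather"], "c3": ["stocks"]}
    print(choose_broadcast_set(docs, reqs, min_sim=0.5))   # (['d1', 'd2'], {'c3'})
```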

11.
张钢  刘凌峰  刘春贵 《微处理机》2005,26(4):13-15,19
This paper discusses the problem of distributing streaming media content to a large number of users over the Internet. Building on the traditional client-server architecture, it considers the server overload that can arise when client requests become too numerous. The proposed StreamCast system is a peer-to-peer technique that relieves the load on the server. The paper discusses the proposed solution in some detail and points out several research problems worth pursuing.

12.
Multicast Video-on-Demand (VoD) systems are scalable and cheap to operate. In such systems, a single stream is shared by a batch of common user requests. In this research, we propose a multicast communication technique in an Enterprise Network where multimedia data are stored in distributed servers. We consider a novel patching scheme called Client-Assisted Patching, where the buffers of clients in a multicast group can be used to patch the missing portion for clients who request the same movie shortly afterwards. This scheme significantly reduces the server load without requiring larger client cache space than conventional patching schemes. Clients can join an existing multicast session without waiting for the next available server stream, which reduces service latency. Moreover, the system is more scalable and cost effective than similar existing systems. Our simulation experiment confirms all these claims.
Md. Humayun Kabir
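In Client-Assisted Patching the newcomer joins the ongoing multicast at once and fetches the portion it missed from the buffer of a client in the same group, provided that buffer still holds it. A rough sketch of that feasibility check; the buffer model and names are my own, not the paper's.

```python
# Rough feasibility check for Client-Assisted Patching: a newcomer can be patched
# by a group member whose playout buffer still holds the seconds it missed.

def find_patch_peer(elapsed, group_members):
    """elapsed: seconds since the multicast started.
    group_members: {client_id: buffer_seconds} for clients in the ongoing group.
    Returns a client able to supply [0, elapsed) from its buffer, else None."""
    for client_id, buffer_seconds in group_members.items():
        if buffer_seconds >= elapsed:      # the whole missed prefix is still buffered
            return client_id
    return None

def admit(elapsed, group_members):
    peer = find_patch_peer(elapsed, group_members)
    if peer is not None:
        return f"join multicast now; patch [0,{elapsed})s from {peer}"
    return "no client can patch; request a server patch stream"

if __name__ == "__main__":
    print(admit(25, {"c1": 20, "c2": 40}))   # c2's buffer covers the missed 25 s
    print(admit(60, {"c1": 20, "c2": 40}))   # falls back to the server
```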

13.
This paper designs and implements an I/O Server system based on the transparent computing model. I/O Server and I/O Client are the two software modules of I/O Manager, a network storage access service that supports the remote booting and running of multiple operating systems in a transparent computing environment; I/O Server runs on the server side and I/O Client runs on the client side. In the transparent computing model, the hardware of each client machine is decoupled from the operating system, and the operating systems and application programs the users need are stored on the server. When a client boots, I/O Server and the boot protocol download I/O Client to the end system and run it. I/O Client then issues I/O requests to I/O Server, which analyzes the received requests, classifies them by priority, schedules them with priority-based time-sliced round-robin, operates on the virtual disk files stored on the server, reduces disk I/O operations through prefetching and caching policies, and returns the results to the client. In this way it supports remote booting of operating systems and serves the various requests issued while the system is running.
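The abstract says I/O Server classifies incoming I/O requests by priority, schedules them with priority-based, time-sliced round-robin, and cuts disk accesses with prefetching and caching. A compact sketch of such a scheduler; the number of priority levels, the per-level weights, and the LRU cache policy are assumptions, not details from the paper.

```python
# Sketch of priority-classified, time-sliced round-robin I/O scheduling with a
# small read cache (weights and cache policy are illustrative assumptions).

from collections import deque, OrderedDict

class IOServer:
    def __init__(self, weights=(4, 2, 1), cache_blocks=128):
        self.queues = [deque() for _ in weights]   # one FIFO queue per priority level
        self.weights = weights                     # requests served per round and level
        self.cache = OrderedDict()                 # LRU block cache
        self.cache_blocks = cache_blocks

    def submit(self, priority, block_id):
        self.queues[priority].append(block_id)

    def _read(self, block_id):
        if block_id in self.cache:                 # cache hit: no disk I/O
            self.cache.move_to_end(block_id)
            return f"cache:{block_id}"
        data = f"disk:{block_id}"                  # stands in for the virtual-disk read
        self.cache[block_id] = data
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)         # evict the least recently used block
        return data

    def run_round(self):
        """One scheduling round: each priority level gets a time slice
        proportional to its weight, highest priority first."""
        served = []
        for level, weight in enumerate(self.weights):
            for _ in range(weight):
                if not self.queues[level]:
                    break
                served.append(self._read(self.queues[level].popleft()))
        return served

if __name__ == "__main__":
    srv = IOServer()
    for b in (10, 11, 12):
        srv.submit(0, b)          # boot-critical reads at the highest priority
    srv.submit(2, 99)             # background request
    print(srv.run_round())
```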

14.
Efficient schemes for broadcasting popular videos
We provide a formal framework for studying broadcasting schemes and design a family of schemes for broadcasting popular videos, the greedy disk-conserving broadcasting (GDB) family. We analyze the resource requirements for GDB, i.e., the number of server broadcast channels, the client storage space, and the client I/O bandwidth required by GDB. Our analysis shows that all of our proposed broadcasting schemes are within a small factor of the optimal scheme in terms of the server bandwidth requirement. Furthermore, GDB exhibits a tradeoff between any two of the three resources. We compare our scheme with a recently proposed broadcasting scheme, skyscraper broadcasting (SB). With GDB, we can reduce the client storage space by as much as 50% or the number of server channels by as much as 30% at the cost of a small additional increase in the amount of client I/O bandwidth. If we require the client I/O bandwidth of GDB to be identical to that of SB, GDB needs only 70% of the client storage space required by SB or one less server channel than SB does. In addition, we show that with small client I/O bandwidth, the resource requirements of GDB are close to the minimum achievable by any disk-conserving broadcasting scheme.

15.
Research on fault-tolerance mechanisms based on the CORBA messaging service
郭长国  周明辉  贾焰  邹鹏 《计算机学报》2002,25(10):1059-1064
CORBA is gradually becoming the dominant standard for object-oriented distributed application middleware, but it currently provides no corresponding mechanism for fault tolerance. Based on a comparison of various fault-tolerance approaches, this paper discusses a fault-tolerance method built on the callback and polling models of the asynchronous messaging service. The method allows replicas of a service object to process client requests in parallel, which improves fault-tolerance performance; it is transparent to the service object and can satisfy the differing fault-tolerance requirements of users. The paper also gives an application example in which this method adds fault-tolerance capability to a legacy application.
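The method lets replicas of a service object handle a client request in parallel through the asynchronous messaging service's callback and polling models. The sketch below only mirrors that pattern with plain Python threads; it is not CORBA AMI, and the replica representation is an assumption made for illustration.

```python
# Pattern sketch (not CORBA AMI): send the same request to all replicas in
# parallel and accept the first reply that arrives; slow or failed replicas are
# simply ignored, which is where the fault tolerance comes from.

from concurrent.futures import ThreadPoolExecutor, as_completed

def invoke(replica, request):
    # stand-in for an asynchronous remote invocation on one replica
    if replica["healthy"]:
        return f"{replica['name']} handled {request}"
    raise ConnectionError(f"{replica['name']} is down")

def fault_tolerant_call(replicas, request):
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(invoke, r, request) for r in replicas]
        for fut in as_completed(futures):          # callback-style completion handling
            try:
                return fut.result()
            except ConnectionError:
                continue                           # a failed replica is tolerated
    raise RuntimeError("all replicas failed")

if __name__ == "__main__":
    replicas = [{"name": "replica-A", "healthy": False},
                {"name": "replica-B", "healthy": True}]
    print(fault_tolerant_call(replicas, "getQuote"))
```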

16.
To address the different video quality-of-service requirements of different clients in a VoD system, this paper proposes a priority-based admission control and dynamic bandwidth allocation strategy. During admission control, the priority of a request and the actual bandwidth occupied by the concurrent streams are considered together, so that more fixed bandwidth is reserved for high-priority requests while the number of concurrent streams is increased. During service, the bandwidth of each stream is dynamically adjusted according to its priority and the network condition, keeping the packet loss rate below a given threshold and guaranteeing that, under the same network conditions, high-priority requests receive higher video service quality.
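A compact sketch of the admission decision described above: part of the link capacity is reserved for high-priority requests, and a new stream is admitted against the bandwidth the currently admitted streams actually occupy. The reservation split and the rates in the example are illustrative assumptions, not the paper's values.

```python
# Illustrative priority-aware admission control: a share of the total bandwidth
# is reserved for high-priority requests; low-priority streams may only use the
# unreserved part.

def admit(request_priority, request_rate, admitted, total_bw, reserved_high_frac=0.3):
    """admitted: list of (priority, actual_rate) for streams already in service."""
    used = sum(rate for _, rate in admitted)
    if request_priority == "high":
        return used + request_rate <= total_bw
    # low priority: must also leave the high-priority reservation untouched
    used_low = sum(rate for p, rate in admitted if p == "low")
    low_budget = total_bw * (1 - reserved_high_frac)
    return used + request_rate <= total_bw and used_low + request_rate <= low_budget

if __name__ == "__main__":
    streams = [("high", 2.0), ("low", 4.0)]             # Mbps actually occupied
    print(admit("low", 9.0, streams, total_bw=20.0))    # True: fits the low-priority budget
    print(admit("low", 11.0, streams, total_bw=20.0))   # False: would eat the reservation
    print(admit("high", 13.0, streams, total_bw=20.0))  # True: high priority may use any free bw
```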

17.
A number of studies have focused on the design of continuous media (CM) servers (e.g., for video and audio) to support the real-time delivery of CM objects. These systems have been deployed in local environments such as hotels, hospitals and cruise ships to support media-on-demand applications. They typically stream CM objects to the clients with the objective of minimizing the buffer space required at the client site. This objective can now be relaxed due to the availability of inexpensive storage devices at the client side. Therefore, we propose a Super-streaming paradigm that can utilize the client side resources in order to improve the utilization of the CM server. To support super-streaming, we propose a technique to enable the CM servers to deliver CM objects at a rate higher than their display bandwidth requirement. We also propose alternative admission control policies to downgrade super-streams in favor of regular streams when the resources are scarce. We demonstrate the superiority of our paradigm over streaming with both analytical and simulation models. Moreover, new distributed applications such as distance learning, digital libraries, and home entertainment require the delivery of CM objects to geographically dispersed clients. For quality purposes, many recent studies have proposed dedicated distributed architectures to support these types of applications. We extend our super-streaming paradigm to be applicable in such distributed architectures. We propose a sophisticated resource management policy to support super-streaming in the presence of multiple servers, network links and clients. Due to the complexity involved in modeling these architectures, we only evaluate the performance of super-streaming by a simulation study.
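Super-streaming sends an object faster than its display rate and parks the surplus in the client's local storage; when server bandwidth gets tight, super-streams are downgraded back to the display rate in favor of new regular streams. A simplified sketch of that downgrade step, with the selection rule and the rates chosen for illustration rather than taken from the paper.

```python
# Simplified super-streaming admission: a new regular stream is admitted by
# downgrading existing super-streams (delivery rate > display rate) until enough
# server bandwidth is freed.

def admit_regular(new_display_rate, streams, server_bw):
    """streams: list of dicts with 'display_rate' and 'delivery_rate' (>= display_rate)."""
    used = sum(s["delivery_rate"] for s in streams)
    free = server_bw - used
    # downgrade the most aggressive super-streams first until the new stream fits
    for s in sorted(streams, key=lambda s: s["delivery_rate"] - s["display_rate"], reverse=True):
        if free >= new_display_rate:
            break
        free += s["delivery_rate"] - s["display_rate"]
        s["delivery_rate"] = s["display_rate"]          # back to plain streaming
    if free >= new_display_rate:
        streams.append({"display_rate": new_display_rate, "delivery_rate": new_display_rate})
        return True
    return False                                        # reject: even downgrading is not enough

if __name__ == "__main__":
    active = [{"display_rate": 4, "delivery_rate": 8},  # a super-stream filling client storage
              {"display_rate": 4, "delivery_rate": 4}]
    print(admit_regular(6, active, server_bw=14))       # True, after downgrading the super-stream
    print(active[0]["delivery_rate"])                   # 4
```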

18.
A software architecture is presented that allows client application programs to interact with a DBMS server in a flexible and powerful way, using either direct, volatile messages, or messages sent via recoverable queues. Normal requests from clients to the server and replies from the server to clients can be transmitted using direct or recoverable messages. In addition, an application event notification mechanism is provided, whereby client applications running anywhere on the network can register for events, and when those events are raised, the clients are notified. A novel parameter passing mechanism allows a set of tuples to be included in an event notification. The event mechanism is particularly useful in an active DBMS, where events can be raised by triggers to signal running application programs.
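The abstract does not name the system's actual interfaces, so the sketch below only gives a hypothetical shape of the event-notification pattern it describes: clients register for named events, and a trigger raises an event whose notification carries a set of tuples as its parameter. All names here are illustrative, not the system's real API.

```python
# Hypothetical sketch of the event-notification pattern described above; the
# class and method names are illustrative, not the actual system's interfaces.

from collections import defaultdict

class EventService:
    def __init__(self):
        self.registrations = defaultdict(list)   # event name -> callbacks of registered clients

    def register(self, event_name, client_callback):
        self.registrations[event_name].append(client_callback)

    def raise_event(self, event_name, tuples):
        """Called by a trigger in the active DBMS; tuples is a list of row tuples
        passed along as the event's parameter."""
        for notify in self.registrations[event_name]:
            notify(event_name, tuples)

if __name__ == "__main__":
    svc = EventService()
    svc.register("stock_low", lambda ev, rows: print(f"client notified of {ev}: {rows}"))
    # a trigger fires after an UPDATE drops quantities below a threshold:
    svc.raise_event("stock_low", [("widget-7", 3), ("widget-9", 1)])
```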

19.
In order to guarantee continuous delivery of a video stream in an on-demand video server environment, a collection of resources (referred to as a logical channel) is reserved in advance. To conserve server resources, multiple client requests for the same video can be batched together and served by a single channel. Increasing the window over which all requests for a particular video are batched results in larger savings in server capacity; however, it also increases the reneging probability of a client. A complication introduced by batching is that if a batched client pauses, a new stream (which may not be immediately available) needs to be started when the client resumes. To provide short response time to resume requests, some channels are set aside and are referred to as contingency channels. To further improve resource utilization, even when a nonbatched client pauses, the channel is released and reacquired upon resume. In this paper, we first develop an analytical model that predicts the reneging probability and expected resume delay, and then use this model to optimally allocate channels for batching, on-demand playback, and contingency. The effectiveness of the proposed policy over a scheme with no contingency channels and no batching is also demonstrated.
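The channel pool described above is split among batched playback, on-demand playback, and a contingency reserve that absorbs resume requests; when a paused client resumes, a contingency channel is taken if one is free. A small sketch of that bookkeeping; the pool sizes and the fallback rule are my assumptions, and the paper's analytical model is what actually sizes the pools.

```python
# Small sketch of channel bookkeeping with a contingency reserve for resumes.
# Pool sizes and the fallback behaviour are illustrative assumptions.

class ChannelPool:
    def __init__(self, total, contingency):
        self.free_regular = total - contingency
        self.free_contingency = contingency

    def start_playback(self):
        """New (possibly batched) playback uses only the regular pool."""
        if self.free_regular > 0:
            self.free_regular -= 1
            return True
        return False

    def pause(self):
        """A paused client releases its channel back to the regular pool."""
        self.free_regular += 1

    def resume(self):
        """Resume requests are served from the contingency reserve first,
        falling back to the regular pool; otherwise the client must wait."""
        if self.free_contingency > 0:
            self.free_contingency -= 1
            return "resumed-on-contingency"
        if self.free_regular > 0:
            self.free_regular -= 1
            return "resumed-on-regular"
        return "wait"

if __name__ == "__main__":
    pool = ChannelPool(total=5, contingency=1)
    for _ in range(4):
        pool.start_playback()        # all regular channels now busy
    pool.pause()                     # one client pauses, its channel is released
    print(pool.resume())             # served from the contingency reserve
    print(pool.resume())             # falls back to the regular pool
```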

20.
Video on demand services require video broadcast schemes to provide efficient and reliable performance under various client request loads. In this paper, we develop an efficient request-load-adaptive broadcast scheme, the speculative load adaptive streaming scheme (SLAS), which requires lower service bandwidth than previous approaches regardless of the request rate. We provide both analysis and simulation to show the performance gain over previous schemes. We also derive the theoretical upper bound on the continuous segment allocations on channels, and find that the number of segments allocated by SLAS is closer to this upper bound than that of other schemes over various numbers of stream channels. Our analysis of client waiting time agrees closely with the simulation results for all client requests. By simulation, we compare the required service bandwidth and storage requirements of the SLAS scheme with those of other schemes and find that SLAS is an efficient broadcast scheme compared to well-known seamless channel transition schemes.

