Similar Documents
20 similar documents found (search time: 31 ms).
1.
In mobile real-time environments with asymmetric network bandwidth, data broadcast is an effective way to access data. Targeting this network characteristic, we analyze several existing broadcast scheduling algorithms. Building on the UFO algorithm, we propose the SBS and CRS algorithms, which improve it on the server side and on the mobile client side, respectively. Both algorithms automatically generate a broadcast schedule from a given access-probability distribution over the data items. Theoretical analysis and experimental results show that the algorithms avoid transaction restarts, effectively reduce data access time, and minimize the average waiting time of clients accessing the data broadcast.
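As a point of reference for how a schedule can be generated from an access-probability distribution, here is a minimal sketch of the classical square-root-rule heuristic (each item's broadcast frequency proportional to the square root of its access probability). It is an illustrative baseline, not the SBS or CRS algorithm from the paper; the item names and cycle length are made up.

```python
import math

def square_root_schedule(probs, cycle_len):
    """Generate one broadcast cycle from item access probabilities.

    Classical heuristic: broadcast item i with frequency proportional to
    sqrt(p_i), which roughly minimizes expected waiting time for unit-size items.
    """
    weights = {item: math.sqrt(p) for item, p in probs.items()}
    total = sum(weights.values())
    # Ideal spacing (in slots) between consecutive broadcasts of each item.
    spacing = {item: total / w for item, w in weights.items()}

    schedule = []
    last = {item: -spacing[item] for item in probs}
    for slot in range(cycle_len):
        # Pick the item that is most "overdue" relative to its ideal spacing.
        item = max(probs, key=lambda i: (slot - last[i]) / spacing[i])
        schedule.append(item)
        last[item] = slot
    return schedule

# Example: a skewed access distribution over four items.
print(square_root_schedule({"A": 0.6, "B": 0.2, "C": 0.15, "D": 0.05}, 12))
```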

2.
We study the broadcast scheduling problem in which clients send requests to a server in order to receive files stored on the server. The server can be scheduled so that several requests are satisfied by a single broadcast. When files are transmitted over computer networks, broadcasting them in fragments provides flexibility in broadcast scheduling that allows per-user response time to be optimized. The broadcast scheduling algorithm is then in charge of determining the number of segments of each file and their order of transmission in each round. In this paper, we derive a closed-form formula that approximates the optimal number of segments for each file, with the aim of minimizing the total response time of requests. The formula is a function of several parameters, including those of the underlying network and of the requests arriving at the server. Based on this approximation we propose a file broadcast scheduling algorithm whose total response time closely matches the optimum. Extensive simulation and numerical study confirm the high accuracy of the analytical approximation. We also investigate the impact of the headers that different network protocols add to each file segment. Our segmentation approach is examined for file sizes ranging from 100 KB to 1 GB. Over this range, the total response time of the segmentation approach deviates from the optimum by 13% on average, and its accuracy improves as file size grows. In addition, the proposed segmentation yields high goodput for the scheduling algorithm.
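The closed-form approximation itself is not reproduced in the abstract; the sketch below only illustrates the trade-off it captures, numerically searching for a segment count that balances scheduling flexibility (smaller segments, shorter waits) against per-segment header overhead. The cost model, parameter names, and default header size are assumptions for illustration, not the authors' formula.

```python
def best_segment_count(file_size, header=1500, bandwidth=10e6, max_segments=512):
    """Pick a segment count by brute force under a simple cost model:
    expected response time ~ half of one segment's transmission time
    (waiting to join an ongoing round) plus the full transfer time
    including per-segment headers."""
    best_k, best_cost = 1, float("inf")
    for k in range(1, max_segments + 1):
        seg_time = (file_size / k + header) / bandwidth  # one segment on the air
        cost = seg_time / 2 + k * seg_time               # wait + total transfer
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Larger files tolerate (and benefit from) more segments.
for size in (100e3, 10e6, 1e9):
    print(int(size), best_segment_count(size))
```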

3.
Mobile computers can be equipped with wireless communication devices that enable users to access data services from any location. In wireless communication, the server-to-client (downlink) bandwidth is much higher than the client-to-server (uplink) bandwidth. This asymmetry makes dissemination of data to client machines a desirable approach. However, disseminating data by broadcast may induce high access latency when the number of broadcast data items is large. We propose two methods to reduce client access latency for broadcast data. Both are based on analyzing the broadcast history (i.e., the chronological sequence of items requested by clients) using data mining techniques. In the first method, the data items on the broadcast disk are organized so that items requested together are placed close to each other. The second method improves the cache hit ratio in order to decrease access latency: clients prefetch data from the broadcast disk based on rules extracted from previous request patterns. The proposed methods are evaluated on a Web log. Performance experiments show that the rule-based methods improve system performance in terms of both the average latency and the cache hit ratio of mobile clients.
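As a toy illustration of the first idea (placing items that are requested close together near each other on the broadcast disk), the sketch below chains items greedily by pairwise co-request counts mined from the request history. The paper mines proper association rules; the windowed co-occurrence counting here is only an assumption-laden stand-in.

```python
from collections import Counter

def cooccurrence_order(history, window=2):
    """Order broadcast items so that items requested close together in the
    history end up adjacent on the broadcast disk (greedy chaining)."""
    pairs = Counter()
    for i, a in enumerate(history):
        for b in history[i + 1:i + window]:
            if a != b:
                pairs[frozenset((a, b))] += 1

    items = list(dict.fromkeys(history))      # unique items, first-seen order
    order = [items.pop(0)]
    while items:
        last = order[-1]
        # Next item is the one most often co-requested with the previous one.
        nxt = max(items, key=lambda x: pairs[frozenset((last, x))])
        order.append(nxt)
        items.remove(nxt)
    return order

print(cooccurrence_order(["a", "b", "a", "c", "b", "a", "d", "c"]))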

4.
Shih-Chiang, Yuan-Cheng, Le-Chi, Ying-Dar. Computer Networks, 2008, 52(18): 3392-3404.
Scheduling packets is the usual way to allocate bandwidth on a bottleneck link. However, it cannot be used to manage the downlink bandwidth at a user-side access gateway, because the traffic is queued at the ISP-side gateway rather than the user-side gateway. An alternative is to schedule requests at the user-side gateway so as to control the amount of response traffic queued at the ISP-side gateway. This work first investigates applying the class-based fair queuing discipline, which has been widely used for scheduling packets, to scheduling requests. We found, however, that simply applying this discipline to requests runs into timing and ordering problems when releasing requests and may not satisfy high-class users. We therefore propose a minimum-service-first request scheduling (MSF-RS) scheme. MSF-RS always selects the next request from the class that has received the minimum service, providing user-based weighted fairness and ensuring more bandwidth for high-class users. It also applies window-based rate control when releasing requests, to maintain full link utilization and reduce user-perceived latency. Analysis, simulation, and a field trial demonstrate that MSF-RS provides fairness while reducing user-perceived latency by 23-30% on average. In addition, an MSF-RS gateway saves 25% of CPU load.
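A minimal sketch of the minimum-service-first idea described above: pick the next request from the class that has received the least weight-normalized service, and release it only while a byte window keeps the downlink busy but not over-queued. Class names, the service metric, and the fixed window are illustrative assumptions; the real MSF-RS additionally derives its window behavior from observed downlink conditions.

```python
from collections import deque

class MinimumServiceFirst:
    """Request scheduler: user-based weighted fairness via minimum-service-first
    selection, plus a window-based cap on released-but-unanswered bytes."""

    def __init__(self, weights, window_bytes):
        self.weights = weights                       # e.g. {"gold": 3, "silver": 1}
        self.window = window_bytes                   # bytes allowed "in flight"
        self.in_flight = 0
        self.service = {c: 0.0 for c in weights}     # normalized service per class
        self.queues = {c: deque() for c in weights}  # pending (request, expected_size)

    def enqueue(self, cls, request, expected_size):
        self.queues[cls].append((request, expected_size))

    def release_next(self):
        """Release one request, or None if the window is full / nothing is queued."""
        backlogged = [c for c, q in self.queues.items() if q]
        if not backlogged or self.in_flight >= self.window:
            return None
        cls = min(backlogged, key=lambda c: self.service[c])
        request, size = self.queues[cls].popleft()
        self.service[cls] += size / self.weights[cls]
        self.in_flight += size
        return request

    def on_response_complete(self, size):
        self.in_flight -= size
```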

5.
The broadcast disk technique is often used to disseminate frequently requested data efficiently to a large number of mobile clients over wireless channels. In broadcast disk environments, a server typically broadcasts different data items with different frequencies to reflect the skewed access patterns of mobile clients. Previously proposed concurrency control methods for mobile transactions in wireless broadcast environments focus on transactions with uniform data access patterns, and they perform poorly when the access patterns are skewed. In broadcast disk environments the broadcast cycle usually becomes long in order to reflect the skewed access patterns, which often causes read-only transactions to access stale rather than the latest data items. Furthermore, update transactions are frequently aborted and restarted in the final validation stage because of update conflicts on the most frequently accessed data items; this increases their average response time and wastes uplink communication bandwidth. In this paper, we extend the existing FBOCC concurrency control method to efficiently handle mobile transactions with skewed data access patterns in broadcast disk environments. Our method allows read-only transactions to access more up-to-date data, and reduces the average response time of update transactions through early aborts and restarts. It also reduces the uplink bandwidth required for the final validation of update transactions. We present an in-depth experimental comparison with existing concurrency control protocols, which shows that our method significantly decreases both the average response time and the amount of uplink bandwidth used.

6.
In this paper, we study an online broadcast scheduling problem with deadlines, in which requests asking for the same page can be satisfied simultaneously by broadcasting that page; every request is associated with a release time, a deadline, and a required page of unit size. The objective is to maximize the number of requests satisfied by the schedule. We focus on an important special case in which every request has a span (the difference between its deadline and its release time) of less than 2. We give an optimal online algorithm, i.e., one whose competitive ratio matches the lower bound of the problem.
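For concreteness, here is a trivial greedy baseline for this problem in discrete time: in every unit slot, broadcast the page wanted by the most requests that are released and not yet past their deadline. It only illustrates the model (shared broadcasts, unit pages, hard deadlines); it is not the paper's competitively optimal algorithm for spans below 2.

```python
def greedy_broadcast(requests, horizon):
    """requests: list of (release, deadline, page); time is slotted, pages unit-size.
    Returns the broadcast schedule and how many requests it satisfies."""
    satisfied = set()
    schedule = []
    for t in range(horizon):
        # Requests that are released, unsatisfied, and can still meet their deadline.
        live = {}
        for idx, (release, deadline, page) in enumerate(requests):
            if idx not in satisfied and release <= t < deadline:
                live.setdefault(page, []).append(idx)
        if not live:
            schedule.append(None)
            continue
        page = max(live, key=lambda p: len(live[p]))   # most-demanded page wins
        schedule.append(page)
        satisfied.update(live[page])
    return schedule, len(satisfied)

print(greedy_broadcast([(0, 2, "p1"), (0, 1, "p2"), (1, 3, "p1"), (1, 2, "p3")], 4))
```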

7.
Disk scheduling in video editing systems
Modern video servers support both video-on-demand and nonlinear editing applications. Video-on-demand servers enable the user to view video clips or movies from a video database, while nonlinear editing systems enable the user to manipulate the content of the video database. Applications such as video and news editing require that the underlying storage server be able to concurrently record live broadcast information, modify prerecorded data, and broadcast an authored presentation. A multimedia storage server that efficiently supports such a diverse set of activities is the focus of this study. A novel real-time disk scheduling algorithm is presented that treats read and write requests in a homogeneous manner in order to ensure that their deadlines are met. Due to the real-time demands of movie viewing, read requests have to be fulfilled within certain deadlines; otherwise, they are considered lost. Since the data to be written to disk is stored in main-memory buffers, write requests can be postponed until critical read requests are processed. However, write requests must still be processed within reasonable delays and without the possibility of indefinite postponement, because of the physical constraint of the limited size of the main-memory write buffers. The new algorithm schedules both read and write requests appropriately, to minimize the number of disk reads that miss their presentation deadlines and to avoid indefinite postponement and large buffer sizes for disk writes. Simulation results demonstrate that the proposed algorithm causes few violations of read deadlines, reduces the waiting time of lower-priority disk requests, and improves the throughput of the storage server by better utilizing the available disk bandwidth.
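The following sketch captures the scheduling idea in the abstract under stated assumptions: reads are served earliest-deadline-first and dropped once their deadline can no longer be met, while writes are postponed until either the disk is otherwise idle or the write buffer nears its limit. The buffer threshold and the slack test are illustrative, not the paper's exact policy.

```python
import heapq

class ReadWriteEDF:
    """Unified real-time disk scheduling sketch: EDF for reads, postponable
    writes bounded by a buffer-occupancy threshold (illustrative values)."""

    def __init__(self, write_buffer_limit):
        self.reads = []        # heap of (deadline, block)
        self.writes = []       # FIFO of blocks held in main-memory buffers
        self.limit = write_buffer_limit

    def submit_read(self, deadline, block):
        heapq.heappush(self.reads, (deadline, block))

    def submit_write(self, block):
        self.writes.append(block)

    def next_request(self, now, service_time):
        # Write buffers nearly full: a write must go out to avoid overflow.
        if len(self.writes) >= self.limit:
            return ("write", self.writes.pop(0))
        # Serve the most urgent read whose deadline can still be met.
        while self.reads:
            deadline, block = heapq.heappop(self.reads)
            if deadline >= now + service_time:
                return ("read", block)
            # Deadline already missed: count it lost and skip it.
        # No serviceable reads: drain postponed writes in FIFO order.
        if self.writes:
            return ("write", self.writes.pop(0))
        return None
```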

8.
A Heuristic Multi-Disk Scheduling Algorithm for Data Broadcast in Asymmetric Network Environments
In asymmetric network environments typified by wireless networks, data broadcast is an effective way to access data. For nonuniform access-probability distributions, we analyze the optimal value of the data broadcast access time and propose a heuristic multi-disk scheduling algorithm (HMD) that automatically generates a broadcast schedule from a given access-probability distribution over the data items. Our theoretical analysis and experimental results show that HMD is an efficient data broadcast scheduling algorithm whose performance approaches the theoretical optimum, and that it is easy to put into practice.
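HMD itself is not spelled out in the abstract; as background for multi-disk broadcast scheduling, the sketch below builds the classical broadcast-disk program (Acharya et al.): items are partitioned into "disks" with relative frequencies, slower disks are split into chunks, and the chunks are interleaved into one broadcast cycle. The partition and the frequencies passed in are assumptions; choosing them well from the access probabilities is what a heuristic like HMD is for.

```python
import math

def broadcast_disk_program(disks, rel_freq):
    """disks: lists of items, hottest disk first, e.g. [["A"], ["B","C"], ["D","E","F","G"]]
    rel_freq: relative broadcast frequency per disk, e.g. [4, 2, 1].
    Assumes each frequency divides the maximum and each disk's size is a
    multiple of max(rel_freq) // its frequency."""
    max_f = max(rel_freq)
    chunked = []
    for disk, f in zip(disks, rel_freq):
        n_chunks = max_f // f                        # slower disks -> more chunks
        size = math.ceil(len(disk) / n_chunks)
        chunked.append([disk[i:i + size] for i in range(0, len(disk), size)])

    program = []
    for minor in range(max_f):                       # one minor cycle per hottest-item slot
        for chunks in chunked:
            program.extend(chunks[minor % len(chunks)])
    return program

# "A" appears 4x per cycle, "B"/"C" 2x, "D".."G" once each.
print(broadcast_disk_program([["A"], ["B", "C"], ["D", "E", "F", "G"]], [4, 2, 1]))
```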

9.
Data broadcast is an attractive data dissemination method in mobile environments. To improve energy efficiency, existing air indexing schemes for data broadcast have focused on reducing tuning time only, i.e., the duration that a mobile client stays active in data accesses. On the other hand, existing broadcast scheduling schemes have aimed at reducing access latency through nonflat data broadcast to improve responsiveness only. Not much work has addressed the energy efficiency and responsiveness issues concurrently. This paper proposes an energy-efficient indexing scheme called MHash that optimizes tuning time and access latency in an integrated fashion. MHash reduces tuning time by means of hash-based indexing and enables nonflat data broadcast to reduce access latency. The design of hash function and the optimization of bandwidth allocation are investigated in depth to refine MHash. Experimental results show that, under skewed access distribution, MHash outperforms state-of-the-art air indexing schemes and achieves access latency close to optimal broadcast scheduling.
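A stripped-down sketch of the hash-based tuning idea: the client hashes the key it wants to a nominal slot in the broadcast cycle and dozes until that slot arrives. Real hash-based air indexes such as MHash also carry displacement/overflow information in each bucket to resolve collisions and to support nonflat broadcast; the two functions below (hypothetical names) omit all of that.

```python
import zlib

def nominal_slot(key, cycle_len):
    """Map a data key to its nominal slot within one broadcast cycle
    (a stable hash so server and clients agree across runs)."""
    return zlib.crc32(key.encode()) % cycle_len

def doze_slots(key, current_slot, cycle_len):
    """How many slots the client can switch to doze mode before the slot
    where its item is nominally broadcast (ignores collisions/overflow)."""
    return (nominal_slot(key, cycle_len) - current_slot) % cycle_len

# Client wakes up at slot 13 of a 64-slot cycle and wants item "stock:AAPL".
print(doze_slots("stock:AAPL", 13, 64))
```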

10.
Edmonds, Pruhs. Algorithmica, 2008, 36(3): 315-330.
We investigate server scheduling policies to minimize average user-perceived latency in pull-based client-server systems (systems where multiple clients request data from a server) in which the server answers requests on a multicast/broadcast channel. We first show that there is no O(1)-competitive algorithm for this problem. We then give a method to convert any nonclairvoyant unicast scheduling algorithm A into a nonclairvoyant multicast scheduling algorithm B. We show that if A works well when jobs can have parallel and sequential phases, then B works well when given twice the resources. More formally, if A is an s-speed c-approximation unicast algorithm, then its counterpart B is a 2s-speed c-approximation multicast algorithm. It is already known [5] that Equi-partition, which devotes an equal amount of processing power to each job, is a (2+ε)-speed O(1+1/ε)-approximation algorithm for unicast scheduling of such jobs. Hence, the algorithm BEQUI, which broadcasts all requested files at a rate proportional to the number of outstanding requests for each file, is a (4+ε)-speed O(1+1/ε)-approximation algorithm. We give another algorithm, BEQUI-EDF, and show that it is also a (4+ε)-speed O(1+1/ε)-approximation algorithm, with the advantages that the maximum number of preemptions is linear in the number of requests and that no preemptions occur if the data items have unit size.
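The BEQUI broadcast rule mentioned above is simple enough to state in a few lines; here is a sketch of the per-file rate computation (the function name and the capacity unit are incidental choices).

```python
def bequi_rates(outstanding, capacity):
    """Split broadcast bandwidth among files in proportion to the number of
    outstanding requests for each file, as in the BEQUI rule described above.
    outstanding: {file: pending request count}; capacity: total bandwidth."""
    total = sum(outstanding.values())
    if total == 0:
        return {f: 0.0 for f in outstanding}
    return {f: capacity * n / total for f, n in outstanding.items()}

print(bequi_rates({"movie.mpg": 6, "news.mpg": 3, "doc.pdf": 1}, capacity=100.0))
```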

11.
In this paper, we propose and study a dynamic approach to scheduling real-time requests in a video-on-demand (VOD) server. Providing quality of service in such servers requires uninterrupted and on-time retrieval of motion video data. VOD services and multimedia applications further require that access to the storage devices be shared among multiple concurrent streams. Most previous VOD scheduling approaches use limited run-time information and thus cannot fully exploit the potential capacity of the system. Our approach improves throughput by using run-time information to relax admission control. It maintains excellent quality of service under varying playout rates by observing deadlines and by reallocating resources to guarantee continuous service. It also reduces start-up latency by beginning service as soon as it is detected that the deadlines of all real-time requests will be met. We establish safe conditions for greedy admission, dynamic control of disk read sizes, fast initial service, and sporadic services. We conduct thorough simulations over a wide range of buffer capacities, load settings, and playout rates to demonstrate the significant improvements in quality of service, throughput, and start-up latency of our approach relative to a static approach.

12.
A video server needs a storage system with large bandwidth in order to serve the real-time retrieval requests of many concurrent users for video streams. The storage system therefore generally takes the form of a disk array consisting of multiple disks. When the storage system serves multiple video stream requests, its bottlenecks are the seek delay caused by random movement of the disk heads and unbalanced disk access due to load imbalance across the disks. This paper presents a novel placement and retrieval policy. The new policy retrieves the requested data through sequential movement of the disk heads while maintaining disk load balance, thereby relieving the retrieval bottlenecks and allowing concurrent real-time retrieval service for more users simultaneously. In addition, the policy reduces the startup latency for requests. Its correctness is analyzed theoretically, and its performance is evaluated through simulation.

13.
14.
Recent technology advances have made multimedia on-demand services, such as home entertainment and home-shopping, important to the consumer market. One of the most challenging aspects of this type of service is providing access either instantaneously or within a small and reasonable latency upon request. We consider improvements in the performance of multimedia storage servers through data sharing between requests for popular objects, assuming that the I/O bandwidth is the critical resource in the system. We discuss a novel approach to data sharing, termed adaptive piggybacking, which can be used to reduce the aggregate I/O demand on the multimedia storage server and thus reduce latency for servicing new requests.
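The arithmetic behind adaptive piggybacking is worth making explicit: if the later of two streams for the same object is displayed slightly faster (and/or the earlier one slightly slower), their playback points converge and the two streams can then be merged onto a single disk stream. The 5% rate alteration below is an illustrative figure, not a value from the paper.

```python
def piggyback_merge_time(gap_seconds, speedup=0.05):
    """Time until two streams of the same object can be merged, given the
    content gap between them and the relative display-rate alteration."""
    return gap_seconds / speedup

# Two requests for the same movie 30 s apart can share one stream after 600 s.
print(piggyback_merge_time(30.0))
```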

15.
As the speed gap between processors and memory keeps widening, memory access instructions, especially those that frequently miss in the cache, have become a major performance bottleneck. Because a compiler cannot know how many cycles a memory instruction actually takes at run time, it usually assumes either the cache-hit or the cache-miss latency for such instructions, which is inaccurate. We introduce cache profiling to collect run-time cache hit/miss information for memory instructions and use it to compute their latencies. On an out-of-order machine, the hardware scheduler dynamically schedules instructions within the issue window well, while the compiler has the advantage of scheduling over a much longer range. Once a cache miss occurs, the reorder buffer tends to fill up and the pipeline stalls. Scheduling miss-prone instructions so that they execute in parallel hides the long miss latency and improves program performance. We therefore target load instructions: we adjust the assumed latency of frequently missing loads and modify the scheduling policy to increase memory-level parallelism. Experiments show that our scheduling improves bzip2 by up to 4.8% and art by 4%, with an overall average improvement of 1.5%.

16.
In a ubiquitous environment, there are many applications in which a server disseminates information of common interest to pervasive clients and devices; for example, an advertisement server sends information from a broadcast server to display devices. We propose an efficient information scheduling scheme for information broadcast systems that reduces the average waiting time for information access while maintaining fairness between information items. Our scheme allocates information items adaptively according to their relative popularity at each local server. Simulation results show that our scheme can reduce the waiting time by up to 30% compared with the round-robin scheme while maintaining cost-effective fairness. Copyright © 2008 John Wiley & Sons, Ltd.

17.
Based on the memory access characteristics of multimedia processing units, we propose a grouped memory access scheduling algorithm for high-performance multimedia SoCs. The algorithm groups memory requests by access ID and page address, schedules them out of order at the granularity of groups, and guarantees correctness by preserving the order among requests with the same ID. Taking both memory access efficiency and quality-of-service requirements into account, it provides a minimum-bandwidth guarantee within each unit's independent scheduling period. Applying the algorithm to a memory access scheduler, simulations of real applications show that, compared with existing bandwidth-allocation-based scheduling algorithms, it lowers memory access latency while guaranteeing each unit's bandwidth demand and raises average bandwidth utilization by 15%.

18.
Congestion of packet forwarding between a source and a destination is a challenge for downlink transmission of entire files (e.g., audio and video). Once a file has been uploaded to the server, a user may request it and the server transmits it without knowledge of the user's bandwidth, which is a major cause of packet loss and long delivery times at the receiver. To address this, the Enhanced and Optimal Path Scheduling Approach (EOPSA) is designed to find optimal scheduling paths for multimedia data transmission in multimedia sensor networks over a cloud server using IoT devices. EOPSA studies multisource video-on-demand streaming in multimedia sensor networks and introduces a heuristic distributed protocol to find optimal routes for multimedia data transmission. Identifying the available bandwidth before transmission ensures proper link establishment between sender and receiver; capturing the bandwidth makes it possible to check whether the user's system can handle the requested media data. In the experimental evaluation, EOPSA improves packet delivery ratio by 0.20, throughput by 130, average delay by 0.20 s, and communication overhead by 14 for networks of 15, 25, 50, 75, and 100 nodes compared with conventional methods.

19.
A Prefetching Policy Based on the State of the Memory Miss Queue
As the gap between memory access speed and processor speed grows ever larger, memory performance has become the bottleneck for improving overall computer system performance. Based on an analysis of instruction-cache and data-cache miss behavior, we propose a prefetching policy that takes the state of the memory miss queue into account. The policy preserves the order of instruction and data accesses, which helps extract prefetch streams, and separates instruction-stream from data-stream prefetching to avoid mutual replacement. When deciding when to issue a prefetch, it considers not only whether the bus is currently idle but also the state of the miss queue, reducing interference with the processor's normal memory requests. A stream-filtering mechanism improves prefetch accuracy and lowers the bandwidth consumed by prefetching. Results show that with this policy the processor's average memory access latency drops by 30% and the IPC of SPEC CPU2000 programs improves by 8.3% on average.

20.
Multimedia streaming applications can consume a significant amount of server and network resources. Periodic broadcast and patching are two approaches that use multicast transmission and client buffering in innovative ways to reduce server and network load, while still allowing asynchronous access to multimedia streams by a large number of clients. Research in this area has focused primarily on the algorithmic aspects of these approaches, with evaluation performed via analysis or simulation. In this paper, we describe the design and implementation of a flexible streaming video server and client test bed that implements both periodic broadcast and patching, and explore the issues that arise when implementing these algorithms on laboratory and Internet-based test beds. We present measurements detailing the overheads associated with the various server components (signaling, transmission schedule computation, data retrieval and transmission), the interactions between the components of the architecture, and the overall end-to-end performance. We also discuss the importance of an appropriate server application-level caching policy for reducing the disk bandwidth needed at the server. We conclude with a discussion of the insights gained from our implementation and experimental evaluation. Subhabrata Sen: The work of this author was conducted while he was at the University of Massachusetts.
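As a reminder of what the patching side of such a test bed implements, here is the standard threshold-based client admission decision in a few lines; this is the common textbook formulation and may differ in detail from the server described above.

```python
def patching_decision(t_now, t_last_full, movie_len, threshold):
    """Threshold-based patching: a client arriving shortly after the last full
    multicast joins it and fetches only the missed prefix as a patch;
    otherwise it triggers a new full multicast of the whole movie."""
    offset = t_now - t_last_full          # how much of the ongoing stream was missed
    if offset <= threshold:
        return {"join_multicast": True, "patch_seconds": offset}
    return {"join_multicast": False, "patch_seconds": movie_len}

# A client arriving 45 s into an ongoing broadcast, with a 60 s patching threshold.
print(patching_decision(t_now=1045.0, t_last_full=1000.0, movie_len=5400.0, threshold=60.0))
```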
