Similar Literature
20 similar documents found (search time: 15 ms)
1.
External mergesort is normally implemented so that each run is stored contiguously on disk and blocks of data are read exactly in the order they are needed during merging. We investigate two ideas for improving the performance of external mergesort: interleaved layout and a new reading strategy. Interleaved layout places blocks from different runs in consecutive disk addresses. This is done in the hope that interleaving will reduce seek overhead during merging. The new reading strategy precomputes the order in which data blocks are to be read according to where they are located on disk and when they are needed for merging. Extra buffer space makes it possible to read blocks in an order that reduces seek overhead, instead of reading them exactly in the order they are needed for merging. A detailed simulation model was used to compare the two layout strategies and three reading strategies. The effects of using multiple work disks were also investigated. We found that, in most cases, interleaved layout does not improve performance, but that the new reading strategy consistently performs better than double buffering and forecasting.
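A minimal sketch of the planned-reading idea summarized above, under assumed data structures: each block carries its disk address and the merge step that consumes it, and the extra buffer slots let the reader fetch nearby blocks early as long as no block misses its deadline. The names, seek model, and scheduling rule are illustrative, not the paper's implementation.

```python
from collections import deque

def plan_read_order(blocks, extra_buffers):
    """blocks: list of dicts with 'disk_addr' and 'needed_at' (the merge step
    that consumes the block). Returns a read order that never falls behind the
    merge but uses the extra buffers to pick nearby blocks first."""
    todo = deque(sorted(blocks, key=lambda b: b["needed_at"]))
    window = []            # blocks we are allowed to prefetch
    schedule = []
    head = 0               # current head position (illustrative)
    while todo or window:
        # The window holds one "due" block plus extra_buffers prefetch slots.
        while todo and len(window) < 1 + extra_buffers:
            window.append(todo.popleft())
        urgent = min(window, key=lambda b: b["needed_at"])
        if urgent["needed_at"] <= len(schedule):
            choice = urgent                                   # must read now
        else:
            choice = min(window, key=lambda b: abs(b["disk_addr"] - head))
        window.remove(choice)
        head = choice["disk_addr"]
        schedule.append(choice)
    return schedule

# Toy example: blocks from two runs interleave on disk; with slack the reads
# are grouped by disk address instead of strict consumption order.
blocks = [{"disk_addr": a, "needed_at": t + 3} for t, a in enumerate([50, 10, 52, 12, 54])]
print([b["disk_addr"] for b in plan_read_order(blocks, extra_buffers=2)])  # e.g. [10, 12, 50, 52, 54]
```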

2.
Supporting continuous media data, such as video and audio, imposes stringent demands on the retrieval performance of a multimedia server. In this paper, we propose and evaluate a set of data placement and retrieval algorithms to exploit the full capacity of the disks in a multimedia server. The data placement algorithm declusters every object over all of the disks in the server, using a time-based declustering unit, with the aim of balancing the disk load. As for runtime retrieval, the quintessence of the algorithm is to give each disk advance notification of the blocks that have to be fetched in the impending time periods, so that the disk can optimize its service schedule accordingly. Moreover, in processing a block request for a replicated object, the server will dynamically channel the retrieval operation to the most lightly loaded disk that holds a copy of the required block. We have implemented a multimedia server based on these algorithms. Performance tests reveal that the server achieves very high disk efficiency. Specifically, each disk is able to support up to 25 MPEG-1 streams. Moreover, experiments suggest that the aggregate retrieval capacity of the server scales almost linearly with the number of disks.
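A small illustrative sketch of the replica-routing rule mentioned above (route each block request to the most lightly loaded disk holding a copy). The data structures and the queue-length load metric are assumptions, not the server's actual code.

```python
# Route a block request to the replica disk with the lightest queue.
def choose_disk(block_id, replica_map, disk_queue_len):
    """replica_map: block_id -> list of disk ids holding a copy.
    disk_queue_len: disk id -> number of requests already queued."""
    candidates = replica_map[block_id]
    return min(candidates, key=lambda d: disk_queue_len[d])

# Example usage with made-up data:
replicas = {"b17": [0, 3, 5]}
queues = {0: 12, 3: 4, 5: 9}
print(choose_disk("b17", replicas, queues))   # -> 3, the least-loaded copy
```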

3.
Research on storage allocation algorithms for video streams in a video server
The rapid development of networking and storage technology has made it possible to build VOD systems that provide video-on-demand service to large numbers of users simultaneously, and a number of such systems are already available. However, to guarantee a given number of concurrent playback streams, these systems reserve large amounts of device resources.

4.
NES-Join is a join algorithm that requires no external sorting, and its complexity is better than that of the classic Sort-Merge Join algorithm and the improved SDC-Join algorithm. This paper proposes an improvement to NES-Join that effectively compresses the empty-record information in the temporary blocks used to stage unmatched records, making NES-Join more practical. Experiments and analysis show that the improved NES-Join has time complexity comparable to the original algorithm while significantly improving disk space utilization.

5.
This paper presents a new data placement scheme for continuous-media playback via a scalable storage system. Multimedia contents are segmented into data blocks for the purpose of being stored, retrieved, and manipulated. If these data blocks belong to some continuous media, then they must be handled in a timely manner, for example, being retrieved before some deadline. One of the main challenges in implementing the above system is the simultaneous retrieval of a great number of different media streams from a very large storage system. The proposed scheme efficiently reduces the seeking delay by a very simple placement method and a retrieval scheduler. Thus, both the storage capacity and the number of concurrent accesses to the storage are scalable. The performance of the proposed scheme is evaluated through a simple analytical model and a practical prototype implementation.

6.
Grouped sweeping scheduling for DASD-based multimedia storage management
This paper presents a new formulation of DASD (direct access storage device) disk arm scheduling schemes for multimedia storage management. The formulation, referred to as grouped sweeping scheduling (GSS), provides a framework for minimizing the buffer space required in the retrieval of multimedia streams. In GSS, the set of n media streams that are to be served concurrently is divided into g groups. Groups are served in fixed order and streams within each group are served in an elevator-like SCAN scheme. Hence, the fixed order (FIFO) and SCAN schemes are special cases of GSS when g=n and g=1, respectively. In this paper an optimization problem is formulated in which the buffer requirement is minimized with respect to the two design parameters: g and the size of the service unit, i.e. the number of blocks accessed in each service cycle. This formulation initially assumes that all media streams have the same playout requirements. A procedure of complexity O(1) is developed for computing the optimum solution to this problem. The proof of optimality and comparisons between the optimized GSS scheme and FIFO and SCAN are also presented. The paper also discusses the effect of disk arrays in the GSS formulation and issues related to operating GSS in a dynamic setting where streams arrive and depart in random order. Finally, the GSS scheme is extended to support heterogeneous media streams where each stream may have its own playout requirement. This paper extends earlier results that were presented at NOSDAV'92 [1] and Multimedia'93 [2].
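A hedged sketch of the GSS service pattern just described, under assumed representations: n streams are split into g groups, groups are served in a fixed cyclic order, and the streams inside the group currently being served are swept in track order (SCAN). Setting g=n recovers fixed order and g=1 recovers pure SCAN; the grouping rule and track values below are illustrative.

```python
def gss_service_order(streams, g):
    """streams: list of (stream_id, track_of_next_block). Returns the order
    in which streams are serviced during one full GSS cycle."""
    groups = [streams[i::g] for i in range(g)]       # simple round-robin split
    order = []
    for group in groups:                              # groups in fixed order
        for sid, _track in sorted(group, key=lambda s: s[1]):  # SCAN in group
            order.append(sid)
    return order

# Example: 6 streams, next-block track positions chosen arbitrarily.
streams = [("s0", 80), ("s1", 10), ("s2", 55), ("s3", 90), ("s4", 20), ("s5", 40)]
print(gss_service_order(streams, g=3))   # 3 groups of 2, SCAN inside each
print(gss_service_order(streams, g=1))   # pure SCAN over all 6 streams
print(gss_service_order(streams, g=6))   # fixed order, one stream per group
```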

7.
A number of recent technological trends have made data intensive applications such as continuous media (audio and video) servers a reality. These servers store and retrieve large volumes of data using magnetic disks. Servers consisting of multiple nodes and large arrays of heterogeneous disk drives have become a fact of life for several reasons. First, magnetic disks might fail. Failed disks are almost always replaced with newer disk models because the current technological trend for these devices is one of annual increase in both performance and storage capacity. Second, storage requirements are ever increasing, forcing servers to be scaled up progressively. In this study, we present a framework to enable parity-based data protection for heterogeneous storage systems and to compute their mean lifetime. We describe the tradeoffs associated with three alternative techniques: independent subservers, dependent subservers, and disk merging. The disk merging approach provides a solution for systems that require highly available secondary storage in environments that also necessitate maximum flexibility.

8.
Many information storage media today take the form of disks, and next-generation removable storage media are disk-shaped as well. The holographic data storage system, one of the most advanced optical memory systems, likewise uses a disk-type photopolymer medium, and tracking servo and tilt servo control are important research problems for such systems. In this paper, we propose intelligent servo control based on fuzzy rules for a holographic data storage system. Patterns for tilt servo control are found through a fuzzy system and a genetic algorithm: the genetic algorithm generates the fuzzy rules used to control the tilt servo. Consequently, the image input–output patterns of tilt servo control are obtained by an intelligent algorithm in the holographic data storage system.

9.
The sequencing of requests in an automated storage and retrieval system has been the subject of many studies in the literature. However, these studies assumed that the locations of items to be stored and retrieved are known, so the sequencing problem consisted in determining a route of minimal travel time between these locations. In reality, for a retrieval request, an item can be in multiple locations of the rack, so there is a set of locations associated with this item rather than a single predetermined location. In this paper, we deal with the sequencing problem where a required product can be in several rack locations and there is a set of empty locations; consequently, the retrieval and storage locations are not known a priori. We sequence by the minimum travel time of a double cycle (DC). A step-by-step optimization method is developed to determine, for each DC and according to the storage and retrieval requests, the location of the item to be stored and the location of the item to be retrieved that allow the minimum DC time. Storage requests are processed in FCFS order, and retrieval requests are grouped into blocks according to wave sequencing.
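An illustrative sketch of the per-cycle choice described above: for one double cycle, pick the empty slot for the storage request and the rack slot holding the requested item so that the total travel (I/O point to storage slot, storage slot to retrieval slot, retrieval slot back to I/O point) is minimal. The Chebyshev travel-time model and the coordinates are assumptions, not the paper's model.

```python
def dc_time(a, b, vx=1.0, vy=1.0):
    """Chebyshev travel time between two rack coordinates (x, y)."""
    return max(abs(a[0] - b[0]) / vx, abs(a[1] - b[1]) / vy)

def best_double_cycle(empty_slots, item_slots, io_point=(0, 0)):
    """Return the (total_time, storage_slot, retrieval_slot) with minimum DC time."""
    best = None
    for s in empty_slots:
        for r in item_slots:
            t = dc_time(io_point, s) + dc_time(s, r) + dc_time(r, io_point)
            if best is None or t < best[0]:
                best = (t, s, r)
    return best

# Example with made-up coordinates:
print(best_double_cycle(empty_slots=[(2, 5), (8, 1)], item_slots=[(3, 4), (9, 2)]))
```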

11.
The Hadoop Distributed File System built on an optical disc library (HDFS optical disc library) suits current big-data storage requirements well in terms of per-unit storage cost, data safety, and media lifetime, but HDFS is poorly suited to storing large numbers of small files and to real-time data reads. To make the HDFS optical disc library usable in more big-data storage scenarios, this paper proposes a magneto-optical virtual storage system (MOVS, Magneto-optical Virtual Storage System) that is better suited to big-data storage. The system inserts a disk cache between the HDFS optical disc library and its users; within the disk cache, techniques such as file-label classification, virtual storage, and small-file merging combine small files into large files suitable for storage in the HDFS optical disc library, improving the system's data transfer speed. The system also uses file scheduling algorithms such as prefetching and cache replacement to dynamically update the files in the disk cache, reducing the number of user accesses to the HDFS optical disc library. Experimental results show that MOVS greatly improves response time and data transfer speed compared with the HDFS optical disc library alone.
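A hedged sketch of the small-file merging step described above: small files accumulated in the disk cache are packed into one large container file, with an offset index, before being written to the backing HDFS optical disc library. The paths, the 128 MB target size, and the index format are illustrative assumptions, not the MOVS implementation.

```python
import os

def merge_small_files(paths, container_path, target_size=128 * 1024 * 1024):
    """Pack files into a single container until target_size is reached.
    Returns an index: filename -> (offset, length) inside the container."""
    index, offset = {}, 0
    with open(container_path, "wb") as out:
        for p in paths:
            size = os.path.getsize(p)
            if offset + size > target_size:
                break
            with open(p, "rb") as f:
                out.write(f.read())
            index[os.path.basename(p)] = (offset, size)
            offset += size
    return index
```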

12.
We propose an efficient writeback scheme that enables guaranteed throughput in high-performance storage systems. The proposed scheme, called de-fragmented writeback (DFW), reduces the positioning time of storage devices under write workloads and thus enables fast writeback in storage systems. We consider both kinds of storage media in designing the DFW scheme: traditional rotating disks and emerging solid-state disks. First, sorting and hole-filling methods are used for rotating disks to achieve higher throughput; the scheme converts fragmented data blocks into sequential ones, reducing the number of write requests and unnecessary disk-head movements. Second, a flash-block-aware, clustering-based writeback scheme is used for solid-state disks, taking the characteristics of flash memory into account. The experimental results show that our schemes achieve high system throughput while guaranteeing data reliability.
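A minimal sketch of the sorting and hole-filling idea for rotating disks, under assumed representations: dirty blocks are sorted by logical block address, and runs separated by small gaps are coalesced into single sequential write extents (absorbing the gap blocks to keep the write contiguous). The gap threshold and block representation are illustrative.

```python
def defragmented_writeback(dirty_lbas, max_gap=8):
    """dirty_lbas: iterable of dirty logical block addresses.
    Returns a list of (start_lba, length) sequential write extents."""
    lbas = sorted(set(dirty_lbas))
    if not lbas:
        return []
    extents = []
    start = prev = lbas[0]
    for lba in lbas[1:]:
        if lba - prev <= max_gap:        # small hole: absorb it into the extent
            prev = lba
        else:                            # large gap: close the current extent
            extents.append((start, prev - start + 1))
            start = prev = lba
    extents.append((start, prev - start + 1))
    return extents

# Example: three scattered dirty regions collapse into two sequential writes.
print(defragmented_writeback([10, 11, 13, 14, 40, 41, 42]))  # [(10, 5), (40, 3)]
```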

13.
Blockchains store transaction data in the form of a distributed ledger, with each node holding a copy of the current data as a hash chain. Because of the chain structure, the number of blocks grows over time and the storage burden on each node grows with it, so storage scalability has become one of the bottlenecks of blockchain development. To address this problem, a blockchain storage scaling model based on the Chinese Remainder Theorem (CRT) is proposed. The model divides the blockchain into high-security blocks and low-security blocks and applies different storage strategies to them: low-security blocks are stored by the whole network (every node keeps a copy), while high-security blocks are split by a CRT-based partitioning algorithm and stored in a distributed manner. In addition, the error detection and correction of a redundant residue number system (RRNS) is used to defend against malicious node attacks, improving data stability and integrity. Experimental results and security analysis show that the proposed model is secure and fault-tolerant, guarantees data integrity, effectively reduces the storage consumption of nodes, and improves the storage scalability of the blockchain system.
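An illustrative sketch of CRT-based partitioning as summarized above: a block, viewed as a large integer, is split into residues modulo pairwise-coprime moduli, the residues are stored on different nodes, and the block is reconstructed with the Chinese Remainder Theorem. The toy moduli and values below are assumptions; a real system would size the moduli so their product exceeds any block value and would add redundant residues (RRNS) for error detection.

```python
from math import prod

def crt_split(value, moduli):
    """Return the residue shares of value under the given coprime moduli."""
    return [value % m for m in moduli]

def crt_reconstruct(residues, moduli):
    """Recombine residue shares into the original value via the CRT."""
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return total % M

moduli = [97, 101, 103, 107]               # pairwise coprime (toy example)
block = 99_999_999                          # must be smaller than prod(moduli)
shares = crt_split(block, moduli)
assert crt_reconstruct(shares, moduli) == block
```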

14.
Random Duplicated Assignment (RDA) is an approach in which video data is stored by assigning a number of copies of each data block to different, randomly chosen disks. It has been shown that this approach results in smaller response times and lower disk and RAM costs compared to the well-known disk striping techniques. Based on this storage approach, one has to determine, for each given batch of data blocks, from which disk each of the data blocks is to be retrieved. This is to be done in such a way that the maximum load of the disks is minimized. The problem is called the Retrieval Selection Problem (RSP). In this paper, we propose a new efficient algorithm for RSP. This algorithm is based on the breadth-first search approach and is able to guarantee optimal solutions for RSP in O(n² + mn) time, where m and n correspond to the number of data blocks and the number of disks, respectively. We will show that our proposed algorithm has a lower time complexity than an existing algorithm, called the MFS algorithm.
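A hedged sketch of the Retrieval Selection Problem itself: each block in a batch can be fetched from any disk holding one of its copies, and the goal is to keep the maximum per-disk load small. The simple greedy below (assign each block to its currently least-loaded candidate disk, most constrained blocks first) only illustrates the problem; it is not the paper's breadth-first-search algorithm and does not guarantee optimal solutions.

```python
from collections import defaultdict

def greedy_retrieval_selection(batch, replica_map):
    """batch: list of block ids; replica_map: block id -> list of disk ids.
    Returns (assignment, max_load)."""
    load = defaultdict(int)
    assignment = {}
    # Handle the most constrained blocks (fewest replicas) first.
    for block in sorted(batch, key=lambda b: len(replica_map[b])):
        disk = min(replica_map[block], key=lambda d: load[d])
        assignment[block] = disk
        load[disk] += 1
    return assignment, max(load.values()) if load else 0

replicas = {"a": [0, 1], "b": [0], "c": [1, 2], "d": [0, 2]}
print(greedy_retrieval_selection(["a", "b", "c", "d"], replicas))
```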

15.
Storage allocation policies for time-dependent multimedia data
Multimedia computing requires support for heterogeneous data types with differing storage, communication and delivery requirements. Continuous media data types, such as audio and video, impose delivery requirements that are not satisfied by conventional physical storage organizations. In this paper, we describe a physical organization for multimedia data based on the need to support the delivery of multiple playout sessions from a single rotating-disk storage device. Our model relates disk characteristics to the different media recording and playback rates and derives their storage pattern. This storage organization guarantees that as long as a multimedia delivery process is running, starvation will never occur. Furthermore, we derive bandwidth and buffer constraints for disk access and present an approach to minimize latencies for non-continuous media stored on the same device. The analysis and numerical results indicate the feasibility of using conventional rotating magnetic disk storage devices to support multiple sessions for video-on-demand applications.

16.
尹洋  刘振军  许鲁 《软件学报》2009,20(10):2752-2765
As computing scales up, networked storage systems are applied in ever more domains and face ever higher I/O performance requirements. Under heavy storage-system load, it becomes practical to use slower media as a data cache on the I/O path between clients and the networked storage system. This paper designs and implements D-Cache, a prototype block-level cache for storage systems based on disk media. The disk cache is managed with a two-level structure, and a corresponding two-level, block-level cache management algorithm is proposed. The algorithm effectively solves the cache-management difficulties caused by the slow response of disk media, and the use of a bitmap eliminates the copy-on-write overhead on a disk-cache write miss. Tests of the prototype show that, under heavy storage-server load, the cache system effectively improves overall system performance.
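A hedged sketch of one way a per-sector validity bitmap can remove the write-miss copy-on-write mentioned above: on a write miss, a cache slot is allocated and only the written sectors are marked valid, so the old block contents never have to be copied in first. The block and sector sizes and the class layout are illustrative assumptions, not the D-Cache design.

```python
SECTORS_PER_BLOCK = 8

class CacheBlock:
    def __init__(self):
        self.data = bytearray(512 * SECTORS_PER_BLOCK)
        self.valid = [False] * SECTORS_PER_BLOCK    # per-sector validity bitmap

class BlockCacheSketch:
    def __init__(self):
        self.blocks = {}                             # block number -> CacheBlock

    def write(self, block_no, sector, payload):
        """Write one 512-byte sector; on a miss, no read-modify-write is needed."""
        blk = self.blocks.setdefault(block_no, CacheBlock())
        blk.data[sector * 512:(sector + 1) * 512] = payload
        blk.valid[sector] = True                     # only this sector becomes valid

    def read(self, block_no, sector):
        """Return the cached sector, or None if it must come from backing storage."""
        blk = self.blocks.get(block_no)
        if blk and blk.valid[sector]:
            return bytes(blk.data[sector * 512:(sector + 1) * 512])
        return None

cache = BlockCacheSketch()
cache.write(7, 0, b"\x00" * 512)
print(cache.read(7, 0) is not None, cache.read(7, 1))   # True None
```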

17.
The rapid development of cloud data storage places high demands on data availability. At present, erasure codes are mainly used to compute encoded data blocks that are stored as distributed redundant data in order to guarantee availability. Although this encoding technique protects the stored data and reduces the extra storage space required, recovering corrupted data incurs large computation and communication overheads. This paper proposes a multi-replica generation and corrupted-data recovery algorithm based on multi-level network coding. The algorithm applies multi-level network coding to the erasure code's…

18.
A video server needs a storage system with large bandwidth in order to serve the real-time retrieval requests of many concurrent users for video streams. The storage system therefore generally takes the form of a disk array consisting of multiple disks. When the storage system serves multiple video stream requests, its bottlenecks come from the seek delay caused by random movement of the disk heads and from unbalanced disk access due to load imbalance among the disks. This paper presents a novel placement and retrieval policy. The new policy retrieves the requested data through sequential movement of the disk heads while maintaining disk load balance, so that it reduces these retrieval bottlenecks and can provide concurrent real-time retrieval service to more users simultaneously. In addition, the novel policy reduces the startup latency of requests. The correctness of the placement and retrieval policy is analyzed theoretically, and its performance is evaluated through simulations.

19.
《Parallel Computing》1997,23(12):1743-1755
This paper addresses the design problems concerning a large-scale, parallel video-on-demand server that consists of multiple clusters of nodes connected by a high performance interconnection network. In order to efficiently control the flow of video streams, we propose two scheduling algorithms for data retrieval and communication. First, we present a disk scheduling algorithm called round scheduling which fully utilizes disk bandwidth, minimizing the disk idle time while the server retrieves data blocks. Second, a communication scheduling algorithm is developed to guarantee conflict-free communication over the multistage interconnection network that is topologically equivalent to the Omega network. We also show some simulation results on the server configuration. Analysis of tradeoffs between the server utilization and the start-up latency helps to determine the proper number and size of server clusters for a set of given nodes.

20.
To address the storage performance bottleneck of PCs, it is worthwhile to tier storage automatically between flash solid-state drives and hard disks according to file size, and to find the cut-off point used to decide the tier. Several domestic PC models were sampled experimentally, and a preorder traversal algorithm was used to collect the distribution of file sizes. The results show that file sizes on PCs are concentrated at 64 KB and below. Because the flash SSD's performance for files of 64 KB and below is far higher than the hard disk's, 64 KB is a suitable cut-off point.
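An illustrative sketch of the measurement and the tiering rule described above: walk a directory tree (a preorder traversal), build a histogram of file sizes, and send files at or below a 64 KB cut-off to the SSD tier. The bucket boundaries, function names, and the tier rule are illustrative assumptions.

```python
import os
from collections import Counter

BUCKETS = [4, 16, 64, 256, 1024]            # bucket upper bounds in KB

def file_size_histogram(root):
    """Preorder walk of the tree under root, counting files per size bucket."""
    hist = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                kb = os.path.getsize(os.path.join(dirpath, name)) / 1024
            except OSError:
                continue
            label = next((f"<={b}KB" for b in BUCKETS if kb <= b), ">1MB")
            hist[label] += 1
    return hist

def choose_tier(size_bytes, cutoff_kb=64):
    """Files at or below the cut-off go to the SSD tier, larger ones to HDD."""
    return "ssd" if size_bytes <= cutoff_kb * 1024 else "hdd"

print(file_size_histogram("."))
print(choose_tier(32 * 1024), choose_tier(5 * 1024 * 1024))   # ssd hdd
```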
