Similar Documents
 Found 20 similar documents (search time: 359 ms)
1.
With the development of large-scale integrated circuit technology, SoC chips for high-definition digital television have become increasingly powerful, integrating a variety of new image-processing techniques. The HDTV SoC processes image frames to improve overall picture quality. This paper introduces the use of DMA for buffering image data in high-definition digital television, explains the basic principles of DMA operation, and describes how image data are laid out in memory as well as the management workflow of the buffers used during DMA transfers.

2.
张军  张德运 《计算机工程》2006,32(17):252-253
This paper presents the design and implementation of a low-cost, USB-based network video-phone system and describes its embedded QoS control mechanism. Based on the Differentiated Services architecture, the ToS field of IP packets is encoded differently to distinguish service priorities. Delay jitter is smoothed with a jitter buffer implemented as a doubly linked list whose nodes hold either data frames or idle frames, switched dynamically between the two; the input thread and output thread read and write the list under mutual exclusion. Experiments show that in most application scenarios the system's communication delay is under 30 ms and the packet loss rate below 10%.
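The jitter buffer described in this abstract can be illustrated with a minimal Python sketch. The class and method names below are ours, not the paper's, and a thread-safe `deque` stands in for the doubly linked list of data/idle frames (Python's `deque` is itself a doubly linked block structure); returning `None` plays the role of an idle frame when no data frame is ready:

```python
import threading
from collections import deque

class JitterBuffer:
    """Sketch of a jitter buffer for delay smoothing: the input thread
    appends received frames, the output thread drains them; both sides
    hold a lock so reads and writes are mutually exclusive."""

    def __init__(self, size=8):
        self.frames = deque(maxlen=size)  # oldest frame dropped when full
        self.lock = threading.Lock()

    def put(self, frame):
        # Called by the input (network-receive) thread.
        with self.lock:
            self.frames.append(frame)

    def get(self):
        # Called by the output (playback) thread; None acts as an idle frame.
        with self.lock:
            return self.frames.popleft() if self.frames else None
```

A playback loop would call `get()` at a fixed cadence, treating `None` as "play silence", which is how the idle/data frame switch smooths jitter.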

3.
刘志  张晶 《计算机工程》2014,(6):5-7,12
To address the poor real-time behavior and safety of the traditional policy for writing dirty buffer-pool pages back to disk, this paper proposes a real-time tuning strategy based on hashing and first-in-first-out (FIFO) doubly linked lists. A load-based tuning strategy creates multiple in-memory FIFO queue lists; a hash function distributes the dirty blocks in the database buffer across the queues according to their last-modified time, balancing the load among the FIFO lists. A global timing constraint then writes the dirty blocks in each queue back to disk in batches, mitigating the heavy system-resource consumption and the high risk of data loss after a crash that afflict the traditional write-back policy. Experimental results show that the strategy improves the timeliness and safety of dirty-data write-back and lowers the data-loss rate.
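The hash-to-FIFO-queues idea above can be sketched briefly. Everything here (class names, the queue count, the batch size) is an illustrative assumption, not the paper's implementation; the point is only that dirty blocks are spread across several FIFO queues by a hash of their last-modified time and then drained in batches:

```python
from collections import deque

NUM_QUEUES = 4

def assign_queue(last_modified_ts, num_queues=NUM_QUEUES):
    # Hash the last-modified timestamp to spread dirty blocks evenly.
    return hash(last_modified_ts) % num_queues

class DirtyWriteback:
    """Sketch: dirty blocks are hashed into several FIFO queues and
    written back to disk in small batches (a global timer would call
    flush_batch periodically)."""

    def __init__(self, num_queues=NUM_QUEUES):
        self.queues = [deque() for _ in range(num_queues)]

    def add_dirty(self, block_id, last_modified_ts):
        self.queues[assign_queue(last_modified_ts)].append(block_id)

    def flush_batch(self, batch_size=2):
        # Pop up to batch_size blocks from each queue, oldest first.
        written = []
        for q in self.queues:
            for _ in range(min(batch_size, len(q))):
                written.append(q.popleft())
        return written  # in a real system these would be issued to disk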

4.
This work uses a doubly linked list and the LRU (least recently used) page-replacement algorithm to simulate how pages are swapped in and out of a database buffer, collecting statistics such as the number of I/O operations and the number of page hits. It illustrates how to use a doubly linked list and its speed advantage for insertions and deletions, how the LRU algorithm is implemented, and how to use MFC list controls and related widgets.
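The LRU buffer simulation described above can be sketched as follows. This is a minimal Python stand-in (the article uses C++/MFC): `OrderedDict` is backed by a doubly linked list, so `move_to_end` and `popitem(last=False)` correspond to the list splicing the article relies on, and we count I/Os and hits as its simulator does:

```python
from collections import OrderedDict

class LRUBufferPool:
    """Sketch of an LRU page buffer that counts disk I/Os and page hits."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # insertion order == recency order
        self.hits = 0
        self.io_count = 0

    def access(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)      # mark most recently used
            self.hits += 1
        else:
            self.io_count += 1                   # page read from disk
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)   # evict least recently used
            self.pages[page_id] = True

pool = LRUBufferPool(capacity=2)
for p in [1, 2, 1, 3, 2]:
    pool.access(p)
# After this trace: one hit (the second access to page 1), four disk I/Os.
```

The constant-time insert/delete at both ends is exactly the doubly-linked-list advantage the article highlights.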

5.
When an electronic device with an RS-232 interface supplies data to a computer, VB for Windows receives it through a data buffer. A buffer that is too small cannot guarantee that a group of data is stored correctly, while one that is too large increases data-acquisition time. Theoretical analysis shows that the optimal buffer length is twice the maximum byte length of one data group: this guarantees that the buffer holds at least one complete group, located after the terminator of the first group. Based on this, the …

6.
Design of Data Acquisition and Data Buffers   (cited: 1; self-citations: 0; others: 1)
The design of a computer real-time data acquisition system necessarily involves building data buffers. This paper discusses the design and control of single-block, double-block, and ring data buffers, and applies concepts such as linear lists, stacks, and queues to buffer design.
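Of the buffer layouts this paper discusses, the ring buffer is the one most worth sketching. A minimal Python version, with names of our choosing, shows the head/tail index arithmetic that makes it suitable for real-time acquisition (the producer never blocks the consumer):

```python
class RingBuffer:
    """Minimal ring (circular) buffer: the acquisition routine writes at
    head, the processing routine reads at tail; both wrap modulo size."""

    def __init__(self, size):
        self.data = [None] * size
        self.size = size
        self.head = 0    # next write position
        self.tail = 0    # next read position
        self.count = 0   # number of unread samples

    def write(self, sample):
        if self.count == self.size:
            return False                      # full: caller handles overrun
        self.data[self.head] = sample
        self.head = (self.head + 1) % self.size
        self.count += 1
        return True

    def read(self):
        if self.count == 0:
            return None                       # empty
        sample = self.data[self.tail]
        self.tail = (self.tail + 1) % self.size
        self.count -= 1
        return sample
```

Single- and double-block buffers trade this index bookkeeping for simpler whole-block swaps; the ring buffer wins when samples arrive continuously.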

7.
Buffer generation for line features is the foundation and key step of buffer analysis. Combining the strengths of raster and vector algorithms, this paper proposes a hybrid vector-raster algorithm for generating buffers around line features. The line feature is first resampled with the Douglas-Peucker method to speed up buffer construction; a scan-line method then converts the vector data to raster form, and a dilation operation generates the buffer. Finally, the raster boundary of the buffer is scanned to extract valid vector data, intersection tests are performed, and self-intersecting polygons arising during buffer generation are resolved.
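The Douglas-Peucker resampling step mentioned above is a standard, well-defined algorithm, so a short sketch is possible (this is the textbook recursive form, not the paper's specific code): keep a vertex only if it lies farther than a tolerance from the chord joining the segment's endpoints, and recurse on both halves.

```python
import math

def perp_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg = math.hypot(dx, dy)
    if seg == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - (ax - px) * dy) / seg

def douglas_peucker(points, tol):
    """Simplify a polyline, keeping points farther than tol from the chord."""
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]          # chord is close enough
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right                    # drop duplicated split point
```

Fewer vertices after resampling means fewer scan-line rasterization and dilation steps, which is why the paper uses it to accelerate buffer construction.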

8.
This paper proposes a design for a live-streaming server. The media receiver accesses the media source via a program link and obtains the media data stream; the media data buffer is a double circular buffer that cyclically caches the received stream; the server's sending side builds subsessions on top of the buffer and generates a live-server link for the program. Terminals access the live server through that link, and the server cyclically reads media data from the buffer and sends it to them. The design suits a variety of networks, is simple to deploy, supports cascading multiple servers, and is inexpensive and highly scalable.

10.
刘淑芬  尧雪莉 《计算机仿真》2021,38(12):286-290
To address the low buffering efficiency, imprecise partitioned storage, and high running cost of heterogeneous network data, this paper proposes a mathematical model of a buffer-replacement algorithm for heterogeneous network data. The data are first divided into several tiers and their formats determined; a new transmission path for heterogeneous network data is used to control the transmission link, and the control algorithm's window size is computed in order to organize the data. Finally, the buffer is split into a cold region and a hot region, the algorithm's interference factors are analyzed, and an alternating algorithm completes the mathematical model of the buffer-replacement algorithm. Simulation results show that the model effectively improves the algorithm's efficiency and accuracy while reducing its running cost.

11.
Anna Haé 《Acta Informatica》1993,30(2):131-146
This paper proposes performance and reliability improvement by using new algorithms for asynchronous operations in disk buffer cache memory. These algorithms allow for writing the files into the buffer cache by the processes and consider the number of active processes in the system and the length of the queue to the disk buffer cache. Writing the contents of the buffer cache to the disk depends on the system load and the write activity. Performance and reliability measures including the elapsed time of writing a file into the buffer cache, the waiting time to start writing a file, and the mean number of blocks written to the disk between system failures are used to show performance and reliability improvement by using the algorithms. Sensitivity analysis is used to influence the algorithms' design. Examples of real systems are used to show the numerical results of performance and reliability improvement in different systems with various disk cache parameters and file sizes.

12.
In this article, we analyze file access characteristics of smartphone applications and find out that a large portion of file data in smartphones are written only once. This specific phenomenon appears due to the behavior of SQLite, a lightweight database library used in most smartphone applications. Based on this observation, we present a new buffer cache management scheme for smartphone systems that considers non-reusability of write-only-once data that we observe. Buffer cache improves file access performances by maintaining hot data in memory thereby servicing subsequent requests without storage accesses. The proposed scheme classifies write-only-once data and aggressively evicts them from the buffer cache to improve cache space utilization. Experimental results with various real smartphone applications show that the proposed buffer cache management scheme improves the performance of smartphone buffer cache by 5%–33%. We also show that our scheme can reduce the buffer cache size to 1/4 of the original system without performance degradation, which allows the reduction of energy consumption in a smartphone memory system by 27%–92%.
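The eviction policy sketched below illustrates the core idea of this abstract under our own simplifying assumptions (the paper's actual classifier works on SQLite write patterns, which we approximate by access count): blocks touched exactly once are treated as write-only-once candidates and evicted before hot blocks, falling back to LRU when no candidate exists.

```python
from collections import OrderedDict

class WriteOnceAwareCache:
    """Sketch: blocks with exactly one access are assumed write-only-once
    and are evicted ahead of reused (hot) blocks; otherwise plain LRU."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> access count, LRU-ordered

    def access(self, block_id):
        if block_id in self.blocks:
            self.blocks[block_id] += 1
            self.blocks.move_to_end(block_id)    # mark most recently used
        else:
            if len(self.blocks) >= self.capacity:
                self._evict()
            self.blocks[block_id] = 1

    def _evict(self):
        # Prefer a write-only-once candidate (accessed exactly once),
        # scanning from least to most recently used.
        for bid, cnt in self.blocks.items():
            if cnt == 1:
                del self.blocks[bid]
                return
        self.blocks.popitem(last=False)          # fall back to plain LRU
```

Evicting never-reused data early frees space for hot blocks, which is the mechanism behind the cache-size reduction the paper reports.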

13.
An SSD generally has a small memory, called cache buffer, to increase its performance and the frequently accessed data are maintained in this cache buffer. These cached data must periodically write back to the NAND Flash memory to prevent the data loss due to sudden power-off, and it should immediately flush all dirty data items into a non-volatile storage media (i.e., NAND Flash memory), when receiving a flush command, while the flush command is supported in Serial ATA (SATA) and Serial Attached SCSI (SAS). Thus, a flush command is an important factor to give significant impact on SSD performance.In this paper, we have investigated the impact of a flush command on SSD performance and have conducted in-depth experiments with versatile workloads, using the modified FlashSim simulator. Our performance measurements using PC and server workloads provide several interesting conclusions. First, a cache buffer without a flush command could improve SSD performance as a cache buffer size increases, since more requested data could be handled in the cache buffer. Second, our experiments have revealed that a flush command might give a negative impact on SSD performance. The average response time per request with a flush command is getting worse compared to not supporting the flush command, as cache buffer size increases. Finally, we have proposed the backend flushing scheme to nullify the negative performance impact of the flush command. The backend flushing scheme first writes the requested data into a cache buffer and sends the acknowledgment of the request completion to a host system. Then, it writes back the data in the cache buffer to NAND Flash memory. 
Thus, the proposed scheme could improve SSD performance since it might reduce the number of dirty data items in a cache buffer to write back to NAND Flash memory. All these results suggest that a flush command could give a negative impact on SSD performance and our proposed backend flushing scheme could improve the SSD performance while supporting a flush command.

14.
Second-level buffer cache management   (cited: 2; self-citations: 0; others: 2)
Buffer caches are commonly used in servers to reduce the number of slow disk accesses or network messages. These buffer caches form a multilevel buffer cache hierarchy. In such a hierarchy, second-level buffer caches have different access patterns from first-level buffer caches because accesses to a second-level are actually misses from a first-level. Therefore, commonly used cache management algorithms such as the least recently used (LRU) replacement algorithm that work well for single-level buffer caches may not work well for second-level. We investigate multiple approaches to effectively manage second-level buffer caches. In particular, we report our research results in 1) second-level buffer cache access pattern characterization, 2) a new local algorithm called multi-queue (MQ) that performs better than nine tested alternative algorithms for second-level buffer caches, 3) a set of global algorithms that manage a multilevel buffer cache hierarchy globally and significantly improve second-level buffer cache hit ratios over corresponding local algorithms, and 4) implementation and evaluation of these algorithms in a real storage system connected with commercial database servers (Microsoft SQL server and Oracle) running industrial-strength online transaction processing benchmarks.
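A heavily simplified sketch of the multi-queue (MQ) idea follows. The full algorithm also demotes idle blocks via per-block expiry times and retains ghost history, both omitted here; what remains is the core structure: m LRU queues, where a block with access frequency f lives in queue floor(log2(f)), and eviction scans from the lowest-frequency queue upward.

```python
import math
from collections import OrderedDict

class MultiQueueCache:
    """Simplified MQ sketch: frequency-tiered LRU queues, evict from Q0 up.
    (Expiry-based demotion and ghost buffers of the real MQ are omitted.)"""

    def __init__(self, capacity, num_queues=4):
        self.capacity = capacity
        self.queues = [OrderedDict() for _ in range(num_queues)]
        self.freq = {}  # block_id -> access frequency

    def _queue_of(self, f):
        return min(int(math.log2(f)), len(self.queues) - 1)

    def access(self, block_id):
        if block_id in self.freq:
            old_q = self._queue_of(self.freq[block_id])
            del self.queues[old_q][block_id]     # may promote to higher queue
            self.freq[block_id] += 1
        else:
            if len(self.freq) >= self.capacity:
                self._evict()
            self.freq[block_id] = 1
        self.queues[self._queue_of(self.freq[block_id])][block_id] = True

    def _evict(self):
        for q in self.queues:                    # lowest-frequency queue first
            if q:
                victim, _ = q.popitem(last=False)
                del self.freq[victim]
                return
```

Because second-level accesses are first-level misses, frequency (not recency alone) separates blocks worth keeping, which is what the tiered queues capture.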

15.
As mobile devices have become part of everyday life, mobile social apps are in widespread use and rely heavily on image resources, such as WeChat Moments and Instagram photo sharing. Browsing images in an app consumes considerable network traffic and slows loading, so most apps first display a thumbnail and load the original image only on demand. On the server side, caching is used to speed up thumbnail generation and reduce disk I/O. Current cache mechanisms, however, focus on factors such as access frequency and recency, paying little attention to the social relationships between the users who generate the data or to mobile users' different access patterns for thumbnails versus originals. This paper divides the cache into two regions, one for thumbnails and one for originals, and proposes a social-relationship-based image cache replacement algorithm: building on traditional replacement algorithms, it incorporates users' social ties and the association between a thumbnail and its original, computing a cache value for each image to drive replacement. Experiments show that the proposed algorithm clearly improves cache hit ratios for both thumbnails and originals.

16.
Recent results in the Rio project at the University of Michigan show that it is possible to create an area of main memory that is as safe as disk from operating system crashes. This paper explores how to integrate the reliable memory provided by the Rio file cache into a database system. Prior studies have analyzed the performance benefits of reliable memory; we focus instead on how different designs affect reliability. We propose three designs for integrating reliable memory into databases: non-persistent database buffer cache, persistent database buffer cache, and persistent database buffer cache with protection. Non-persistent buffer caches use an I/O interface to reliable memory and require the fewest modifications to existing databases. However, they waste memory capacity and bandwidth due to double buffering. Persistent buffer caches use a memory interface to reliable memory by mapping it into the database address space. This places reliable memory under complete database control and eliminates double buffering, but it may expose the buffer cache to database errors. Our third design reduces this exposure by write protecting the buffer pages. Extensive fault tests show that mapping reliable memory into the database address space does not significantly hurt reliability. This is because wild stores rarely touch dirty, committed pages written by previous transactions. As a result, we believe that databases should use a memory interface to reliable memory. Received January 1, 1998 / Accepted June 20, 1998

17.
In P2P-based video-on-demand systems, inefficient use of the client-side buffer degrades the quality of service of the streaming VOD system. This paper proposes P2P_VOD, a new hybrid-P2P streaming VOD model that divides the client buffer into three regions and details the cache-replacement mechanism for client-node buffers. The model jointly considers the balance of data-block replication and the hit ratio of nodes' VCR operations, so that the program data blocks cached across nodes are globally optimized and the server load is effectively relieved. Comparative simulation experiments confirm the model's advantages in startup delay and server load.

18.
This paper analyzes the performance bottlenecks of redundant arrays of independent disks (RAID) and argues for introducing a cache into the disk array. After examining a typical cache-buffer management module, it surveys several approaches to cache optimization and finally proposes a new cache-management strategy.

19.
The cache memory consumes a large proportion of the energy used by a processor. In the on-chip cache, the translation lookaside buffer (TLB) accounts for 20–50% of energy consumption of the on-chip cache. To reduce energy consumption caused by TLB accesses, a virtual cache can be accessed by virtual addresses which are issued by a processor directly. However, a virtual cache may result in the synonym problem. In this paper, we propose low-cost synonym detection hardware and a synonym data coherence mechanism. These reduce the energy consumption incurred by TLB lookups, and maintain synonym data consistency in the virtual cache. The proposed synonym detection hardware efficiently reduces the number of blocks that must be looked up in a virtual cache for saving energy. In addition, the proposed synonym data coherence mechanism also reduces the number of invalidated blocks in the virtual cache to prevent the destruction of cache locality. The simulation results show that our proposed energy-aware virtual cache consumes 51%, 27%, and 20% less energy than the traditional physical cache, traditional virtual cache, and synonym lookaside buffer (SLB), respectively. In addition, our proposed design shows almost the same static energy consumption as SLB, and reduces static energy consumption by about 20% compared with the traditional physical cache and virtual cache.

20.
A Low-Power Instruction Cache Scheme Based on a Record Buffer   (cited: 1; self-citations: 1; others: 1)
Most modern microprocessors use an on-chip cache to bridge the large speed gap between main memory and the central processing unit (CPU), but the cache has also become a major source of processor power consumption, much of it from the instruction cache. A buffer can filter out most instruction-cache accesses and thereby reduce power, yet a considerable number of unnecessary accesses to the storage banks remain. This paper therefore proposes RBC, a low-power instruction-cache architecture based on a record buffer. Through the record buffer and a modified bank organization, RBC filters out most unnecessary bank accesses, effectively reducing cache power consumption. Simulations of 10 SPEC2000 benchmark programs show that, compared with a conventional buffer-based cache architecture, the scheme saves 24.33% of instruction-cache energy at a cost of only 6.01% in processor performance and 3.75% in area.
