Similar Literature
20 similar records found (search time: 125 ms).
1.
This paper describes the composition and working principle of the buffered engagement control system of a wet clutch test bench. An electro-hydraulic proportional control method is adopted: an electro-hydraulic proportional driver circuit drives a proportional pressure-reducing valve to regulate the engagement oil pressure. The driver circuit is designed around the TLE7242-2G chip and provides controlled output of a PWM square-wave signal. The host control software is written in LabVIEW; it generates a control signal with the desired buffering profile and closes the control loop through data acquisition. Experimental data verify that the driver circuit's output signal is stable and accurate, and that the buffer control system is effective for wet clutch buffer control.
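As a rough illustration of the buffering profile described above, the following C sketch ramps a PWM duty command from a fast-fill level up to the full-engagement level. pwm_set_duty() and all duty values are hypothetical stand-ins for illustration, not the paper's TLE7242-2G driver interface.

    /* Minimal sketch of a buffered (ramped) engagement-pressure command.
     * pwm_set_duty() is a hypothetical hardware hook; the real system drives
     * a proportional pressure-reducing valve through the TLE7242-2G circuit. */
    #include <stdio.h>

    static void pwm_set_duty(double duty) {        /* hypothetical driver hook */
        printf("PWM duty = %.1f%%\n", duty * 100.0);
    }

    int main(void) {
        const double p_fill  = 0.30;   /* fast-fill duty (assumed value) */
        const double p_final = 0.85;   /* full-engagement duty (assumed value) */
        const int    steps   = 20;     /* ramp resolution */

        pwm_set_duty(p_fill);                      /* phase 1: rapid fill */
        for (int i = 1; i <= steps; i++)           /* phase 2: linear buffer ramp */
            pwm_set_duty(p_fill + (p_final - p_fill) * i / steps);
        return 0;
    }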

2.
This paper presents the host-side SCADA configuration design for an automatic buffer warehouse. Through sensor signal acquisition, PLC programming, and monitoring in KingView (组态王), the system implements automatic identification, buffering, and conveying of goods.

3.
Improving the efficiency of the buffering mechanism in a database management system is a difficult problem that every DBMS must face and solve. This paper discusses the design and implementation of a multi-threaded asynchronous buffering mechanism for a DBMS, and describes a concrete implementation using CDBase, a new relational database management system developed by the Information Center of the University of Electronic Science and Technology of China from the Eighth Five-Year Plan period to the present, as an example.
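The general shape of such a mechanism can be sketched as a producer-consumer queue: worker threads deposit dirty pages into a shared buffer and return immediately, while a background thread performs the actual I/O. The following POSIX-threads sketch shows only the idea, not CDBase's actual implementation; all names are illustrative.

    /* Minimal sketch of an asynchronous buffer: writers enqueue dirty pages,
     * a background thread flushes them.  Compile with -lpthread. */
    #include <pthread.h>
    #include <stdio.h>

    #define QSIZE 8
    static int queue[QSIZE], head, tail, count, done;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

    static void enqueue(int page) {               /* called by worker threads */
        pthread_mutex_lock(&m);
        while (count == QSIZE) pthread_cond_wait(&not_full, &m);
        queue[tail] = page; tail = (tail + 1) % QSIZE; count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&m);
    }

    static void *flusher(void *arg) {             /* background flush thread */
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&m);
            while (count == 0 && !done) pthread_cond_wait(&not_empty, &m);
            if (count == 0 && done) { pthread_mutex_unlock(&m); return NULL; }
            int page = queue[head]; head = (head + 1) % QSIZE; count--;
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&m);
            printf("flushing page %d to disk\n", page);  /* stand-in for real I/O */
        }
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, flusher, NULL);
        for (int p = 0; p < 20; p++) enqueue(p);  /* writers never block on disk */
        pthread_mutex_lock(&m); done = 1; pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&m);
        pthread_join(t, NULL);
        return 0;
    }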

4.
This paper describes a PC-DAQ data acquisition system composed of a data acquisition card and signal-conditioning circuitry. Using DMA data transfer, double buffering, database technology, and a modular design approach, the system achieves continuous real-time acquisition of multiple parameters.
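The double-buffering (ping-pong) technique mentioned here alternates two buffers so that acquisition and processing overlap. A minimal sketch, with dma_fill() standing in for the DAQ card's DMA transfer and process() for the analysis/storage stage:

    /* Minimal sketch of double-buffered (ping-pong) acquisition: while the
     * hardware fills one buffer via DMA, the CPU processes the other. */
    #include <stdio.h>

    #define N 1024
    static double buf[2][N];

    static void dma_fill(double *b) {             /* stand-in for a DMA transfer */
        for (int i = 0; i < N; i++) b[i] = (double)i;
    }
    static void process(const double *b) {        /* stand-in for analysis/storage */
        double sum = 0;
        for (int i = 0; i < N; i++) sum += b[i];
        printf("block mean = %f\n", sum / N);
    }

    int main(void) {
        int active = 0;
        dma_fill(buf[active]);                    /* prime the first buffer */
        for (int block = 0; block < 4; block++) {
            int ready = active;
            active ^= 1;                          /* swap buffers */
            dma_fill(buf[active]);                /* hardware fills the new buffer */
            process(buf[ready]);                  /* CPU consumes the full one */
        }
        return 0;
    }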

5.
This paper describes a PC-DAQ data acquisition system composed of a data acquisition card and signal-conditioning circuitry. Using DMA data transfer, double buffering, database technology, and a modular design approach, the system achieves continuous real-time acquisition of multiple parameters.

6.
The Galaxy (YinHe) intelligent tool machine is a general-purpose intelligent computer system based on RISC that supports several artificial-intelligence languages as well as conventional C; the data buffer unit (DBU) is an important component of this machine. This paper discusses the DBU's functions and structure, its support for multiprocessor environments and for Prolog, and its handling of the various access types.

7.
An efficient FAT32 file system was designed for a storage system. The implementation follows a layered design: data is exchanged among a physical layer, a buffer layer, and a file layer; service-function interfaces were defined and flowcharts for the service functions were drawn up. The design applies a relatively advanced access policy to the disk buffer, which proves very effective and greatly improves the file system's read/write efficiency. Finally, the buffer layer and file layer were tested; the results show improved access efficiency and correct create, search, write, and read operation, meeting the design requirements and laying a foundation for further research on the system.
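The layering can be pictured as follows: the file layer requests logical sectors from the buffer layer, which serves them from a cache or falls through to the physical layer. This toy single-slot version illustrates only the layering idea, not the paper's actual service-function interface.

    /* Minimal sketch of the layered idea: file layer -> buffer layer ->
     * physical layer, with the buffer layer absorbing repeated reads. */
    #include <stdio.h>
    #include <string.h>

    #define SECTOR 512

    static int  cached_lba = -1;
    static char cache[SECTOR];

    static void phys_read(int lba, char *out) {   /* physical layer (stand-in) */
        memset(out, (char)lba, SECTOR);
        printf("physical read of LBA %d\n", lba);
    }

    static void buf_read(int lba, char *out) {    /* buffer layer */
        if (lba != cached_lba) {                  /* miss: go to physical layer */
            phys_read(lba, cache);
            cached_lba = lba;
        }
        memcpy(out, cache, SECTOR);               /* hit or freshly filled */
    }

    int main(void) {                              /* "file layer" test driver */
        char sector[SECTOR];
        buf_read(7, sector);                      /* miss: one physical read */
        buf_read(7, sector);                      /* hit: no physical read */
        return 0;
    }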

8.
Serial Driver Design Based on Double Buffering and Ring Buffers
This paper presents a serial driver design based on double-buffering and ring-buffer techniques, giving the design approach and implementation method for high-speed transmission and reception using double buffers and ring buffers, and applying the method to write a WDM-model driver for an asynchronous serial board.
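A minimal sketch of the ring buffer at the heart of such a driver (the WDM specifics are omitted): the receive interrupt would call rb_put() for each incoming byte, and the driver's read path drains the buffer with rb_get(). The names and the power-of-two sizing are illustrative choices, not the paper's API.

    /* Minimal single-producer/single-consumer ring buffer.  Free-running
     * unsigned indices plus a power-of-two size make the wrap a cheap mask. */
    #include <stdint.h>
    #include <stdio.h>

    #define RB_SIZE 256                           /* must be a power of two */
    static uint8_t  rb_data[RB_SIZE];
    static unsigned rb_head, rb_tail;             /* head: write, tail: read */

    static int rb_put(uint8_t byte) {             /* producer (e.g., RX interrupt) */
        if (rb_head - rb_tail == RB_SIZE) return -1;  /* full: flag an overrun */
        rb_data[rb_head++ & (RB_SIZE - 1)] = byte;
        return 0;
    }

    static int rb_get(uint8_t *byte) {            /* consumer (read dispatch) */
        if (rb_head == rb_tail) return -1;        /* empty */
        *byte = rb_data[rb_tail++ & (RB_SIZE - 1)];
        return 0;
    }

    int main(void) {
        for (uint8_t b = 'a'; b <= 'e'; b++) rb_put(b);
        uint8_t b;
        while (rb_get(&b) == 0) putchar(b);
        putchar('\n');
        return 0;
    }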

9.
An efficient FAT32 file system was designed for a storage system. The implementation follows a layered design: data is exchanged among a physical layer, a buffer layer, and a file layer; service-function interfaces were defined and flowcharts for the service functions were drawn up. The design applies a relatively advanced access policy to the disk buffer, which proves very effective and greatly improves the file system's read/write efficiency. Finally, the buffer layer and file layer were tested; the results show improved access efficiency and correct create, search, write, and read operation, meeting the design requirements and laying a foundation for further research on the system.

10.
This paper describes the design of the instruction buffer queue and its controller for YHFT-X, a fixed-point, high-performance, low-power digital signal processor. To feed the functional units with a continuous stream of dense, variable-length parallel instructions, an improved dynamically managed circular buffer queue structure is proposed. The design removes limitations of existing loop-instruction handling: when functional units are plentiful, software pipelining is carried out through the loop buffer queue, greatly reducing code size, enabling parallel execution of the instructions inside the loop body, and relieving the instruction-fetch pressure on memory. The structure also supports blocked instruction prefetching, hiding part of the pipeline stalls. Verification and comparative testing show that it meets the application requirements for high performance and low power.
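The loop-buffer idea can be shown in miniature: the first iteration of a short loop is fetched from memory and captured, and later iterations are issued straight from the buffer, removing fetch pressure. This is a sketch under assumed names (fetch(), issue()), not the YHFT-X design itself.

    /* Minimal sketch of a loop buffer: iteration 1 fetches and captures the
     * loop body; iterations 2..n replay it without touching memory. */
    #include <stdio.h>

    #define LOOP_BUF 8
    static int loop_buf[LOOP_BUF];

    static int fetch(int pc) {                    /* stand-in for memory fetch */
        printf("fetch from memory, pc=%d\n", pc);
        return pc * 10;                           /* fake instruction word */
    }
    static void issue(int insn) { printf("issue %d\n", insn); }

    int main(void) {
        const int body = 4, iters = 3;            /* 4-instruction loop, 3 trips */
        for (int pc = 0; pc < body; pc++)         /* iteration 1: fetch and capture */
            issue(loop_buf[pc] = fetch(pc));
        for (int it = 1; it < iters; it++)        /* later iterations: replay */
            for (int pc = 0; pc < body; pc++)
                issue(loop_buf[pc]);              /* no memory fetch needed */
        return 0;
    }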

11.
Instruction caches consume a significant share of the power in modern embedded processors. This paper proposes a low-power scheme for set-associative instruction caches based on way-access traces: a modified instruction cache and branch target buffer build and maintain, at run time, traces of which cache way each access hits, eliminating instruction-cache hit-detection tag checks and accesses to irrelevant ways. A trace-maintenance policy based on cross-line-access predecessor pointers, branch predecessor states, branch predecessor pointers, and branch target indices is further proposed to reduce how often the trace information must be rebuilt, so that established way-access traces are exploited more effectively. Experimental results show that with the optimized way-access-trace policy, tag-memory and data-memory accesses fall to 3.60% and 27.70%, respectively, of those of a conventional instruction cache.

12.
The power consumed by memory systems accounts for 45% of the total power consumed by an embedded system, and the power consumed during a memory access is 10 times higher than during a cache access. Thus, increasing the cache hit rate can effectively reduce the power consumption of the memory system and improve system performance. In this study, we increased the cache hit rate and reduced the cache-access power consumption by developing a new cache architecture known as a single linked cache (SLC) that stores frequently executed instructions. By adding a new link field, SLC combines the low power consumption and low access delay of a direct-mapped cache with a cache hit rate close to that of a two-way set-associative cache. In addition, we developed another design known as multiple linked caches (MLC) to further reduce the power consumption of each cache access and to avoid unnecessary cache accesses when the requested data is absent from the cache. In MLC, the linked cache is split into several small linked caches that store frequently executed instructions, reducing the power consumption of each access. To avoid unnecessary cache accesses when a requested instruction is not in the linked caches, the addresses of the frequently executed blocks are recorded in the branch target buffer (BTB). By consulting the BTB, the processor can go directly to memory for the requested instruction when it is not in the cache. In simulations, our method outperformed selective compression, a traditional cache, and a filter cache in terms of cache hit rate, power consumption, and execution time.
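A rough sketch of the SLC lookup path: a direct-mapped array whose lines each carry a link field naming one alternate line to probe on a primary miss. This is an illustration of the idea only; a real design would store enough tag bits to disambiguate blocks relocated across sets.

    /* Minimal sketch of a single-linked-cache lookup: one direct-mapped probe,
     * then at most one extra probe through the line's link field. */
    #include <stdint.h>
    #include <stdio.h>

    #define LINES 64

    struct line { uint32_t tag; int valid; int link; };  /* link: alternate index */
    static struct line cache[LINES];

    static int slc_lookup(uint32_t addr) {
        uint32_t idx = addr % LINES, tag = addr / LINES;
        if (cache[idx].valid && cache[idx].tag == tag) return 1;  /* primary hit */
        int alt = cache[idx].link;                /* one extra probe via link */
        if (alt >= 0 && cache[alt].valid && cache[alt].tag == tag) return 1;
        return 0;                                 /* miss: would fetch from memory */
    }

    int main(void) {
        for (int i = 0; i < LINES; i++) cache[i].link = -1;
        cache[5] = (struct line){ .tag = 1, .valid = 1, .link = 6 };
        cache[6] = (struct line){ .tag = 2, .valid = 1, .link = -1 };  /* displaced block */
        printf("%d %d\n", slc_lookup(1 * LINES + 5),   /* primary hit  -> 1 */
                          slc_lookup(2 * LINES + 5));  /* hit via link -> 1 */
        return 0;
    }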

13.
An SSD generally has a small memory, called a cache buffer, to increase its performance, and frequently accessed data are kept in this cache buffer. The cached data must periodically be written back to NAND Flash memory to prevent data loss on sudden power-off, and when the SSD receives a flush command, which is supported in both Serial ATA (SATA) and Serial Attached SCSI (SAS), it must immediately flush all dirty data items to non-volatile storage (i.e., NAND Flash memory). A flush command is therefore an important factor with a significant impact on SSD performance.

In this paper, we investigate the impact of the flush command on SSD performance through in-depth experiments with diverse workloads, using a modified FlashSim simulator. Our measurements with PC and server workloads lead to several interesting conclusions. First, without a flush command, a cache buffer improves SSD performance as its size increases, since more requests can be handled in the cache buffer. Second, our experiments reveal that a flush command can have a negative impact on SSD performance: as the cache buffer grows, the average response time per request becomes worse with the flush command than without it. Finally, we propose a backend flushing scheme to nullify this negative performance impact. The backend flushing scheme first writes the requested data into the cache buffer and sends the acknowledgment of request completion to the host system; only then does it write the data back from the cache buffer to NAND Flash memory. The scheme can thus improve SSD performance, since it reduces the number of dirty data items in the cache buffer that must be written back to NAND Flash memory.

All these results suggest that a flush command can degrade SSD performance and that the proposed backend flushing scheme can restore performance while still supporting flush commands.
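The backend-flushing idea can be sketched as follows: the write path stores data in the cache buffer and acknowledges the host at once, while the actual NAND programming happens later. ack_host() and nand_program() are illustrative stand-ins; the paper's actual policies are not modeled.

    /* Minimal sketch of backend flushing: acknowledge on cache insertion,
     * program NAND in the background (here: when the cache fills or at exit). */
    #include <stdio.h>

    #define CACHE_SLOTS 4
    static int dirty[CACHE_SLOTS], ndirty;

    static void ack_host(int lba)     { printf("ack write lba=%d\n", lba); }
    static void nand_program(int lba) { printf("  backend: program lba=%d to NAND\n", lba); }

    static void backend_flush(void) {             /* runs when idle or under pressure */
        while (ndirty > 0) nand_program(dirty[--ndirty]);
    }

    static void host_write(int lba) {
        if (ndirty == CACHE_SLOTS) backend_flush();  /* cache full: drain first */
        dirty[ndirty++] = lba;                    /* 1) store in cache buffer */
        ack_host(lba);                            /* 2) complete request immediately */
    }                                             /* 3) NAND write happens later */

    int main(void) {
        for (int lba = 0; lba < 10; lba++) host_write(lba);
        backend_flush();   /* a flush command now finds little dirty data left */
        return 0;
    }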

14.
The L1 cache in today’s high-performance processors accesses all ways of a selected set in parallel. This constitutes a major source of energy inefficiency: at most one of the N fetched blocks can be useful in an N-way set-associative cache. The other N-1 cachelines will all be tag mismatches and subsequently discarded.

We propose to eliminate unnecessary associative fetches by exploiting certain software semantics in cache design, thus reducing dynamic power consumption. Specifically, we use memory region information to eliminate unnecessary fetches in the data cache, and ring (privilege) level information to optimize fetches in the instruction cache. We present a design that is performance-neutral, transparent to applications, and incurs a space overhead of a mere 0.41% of the L1 cache.

We show significantly reduced cache lookups with benchmarks including SPEC CPU, SPECjbb, SPECjAppServer, PARSEC, and Apache. For example, for SPEC CPU 2006, the proposed mechanism reduces cache block fetches from the data and instruction caches by an average of 29% and 53% respectively, yielding power savings of 17% and 35% in the caches compared to aggressively clock-gated baselines.
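One way to picture the proposed filtering: annotate each way with a coarse software-semantic tag (here, a memory region) and probe only the ways whose tag can match the access. This sketch illustrates the filtering principle only, not the paper's exact mechanism.

    /* Minimal sketch of semantics-guided way filtering: ways whose region
     * cannot match the access are skipped, saving tag and data lookups. */
    #include <stdio.h>

    #define WAYS 4
    enum region { R_STACK, R_HEAP, R_CODE };

    struct way { unsigned tag; int valid; enum region reg; };

    static int lookup(struct way set[WAYS], unsigned tag, enum region reg) {
        int probes = 0, hit = -1;
        for (int w = 0; w < WAYS; w++) {
            if (!set[w].valid || set[w].reg != reg) continue;  /* filtered out */
            probes++;                              /* only matching-region ways */
            if (set[w].tag == tag) hit = w;
        }
        printf("probed %d of %d ways, %s\n", probes, WAYS, hit >= 0 ? "hit" : "miss");
        return hit;
    }

    int main(void) {
        struct way set[WAYS] = {
            { 0x10, 1, R_STACK }, { 0x20, 1, R_HEAP },
            { 0x30, 1, R_HEAP  }, { 0x40, 1, R_CODE },
        };
        lookup(set, 0x30, R_HEAP);    /* probes only the 2 heap ways */
        lookup(set, 0x99, R_STACK);   /* probes only the 1 stack way: miss */
        return 0;
    }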

15.
An Analysis of the Data Buffer Cache in the Linux File System
This article makes an in-depth study of data buffer management in the Linux file system, covering the overall structure of the data buffer cache, the data structures it employs, and the implementation methods.

16.
As mobile devices have become part of everyday life, mobile social apps are in widespread use. These apps handle large numbers of images, such as WeChat Moments and Instagram photo sharing. Browsing images in an app consumes considerable network traffic and slows loading, so most apps first display a thumbnail and load the original image only on user demand. On the server side, caching is likewise used to speed up thumbnail generation and reduce disk I/O. Current cache mechanisms, however, focus on factors such as access frequency and recency; they pay little attention to the social relations between the users who generate the data, and they ignore mobile users' different access patterns for thumbnails versus originals. This paper divides the cache into two regions, a thumbnail cache and an original-image cache, and proposes a social-relation-based image cache replacement algorithm: on top of a traditional replacement algorithm it adds the users' social relations and the association between each thumbnail and its original, computing a cache value for every image to drive replacement. Experiments show that the proposed algorithm clearly improves the cache hit rate for both thumbnails and original images.
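The replacement decision can be sketched as a scoring function: each cached image's value combines recency, frequency, and a social-relation weight, and the lowest-valued entry is evicted. The weights and fields below are assumptions for illustration, not the paper's formula.

    /* Minimal sketch of value-based cache replacement with a social term. */
    #include <stdio.h>

    struct entry { const char *id; int freq; int last_access; double social; };

    static double cache_value(const struct entry *e, int now) {
        double recency = 1.0 / (1 + now - e->last_access);
        return 0.4 * recency + 0.3 * e->freq + 0.3 * e->social;  /* assumed weights */
    }

    static int victim(struct entry *c, int n, int now) {
        int v = 0;
        for (int i = 1; i < n; i++)
            if (cache_value(&c[i], now) < cache_value(&c[v], now)) v = i;
        return v;                                 /* lowest value gets evicted */
    }

    int main(void) {
        struct entry cache[] = {
            { "thumb:alice/42", 9, 95, 0.9 },     /* close friend, hot */
            { "orig:bob/7",     2, 60, 0.2 },     /* weak tie, cold */
            { "thumb:carol/3",  5, 90, 0.5 },
        };
        int now = 100;
        printf("evict %s\n", cache[victim(cache, 3, now)].id);
        return 0;
    }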

17.
《Computer》1973,6(3):30-36
A cache-based computer system employs a fast, small memory - the "cache" - interposed between the usual processor and main memory. At any given time the cache contains as much as possible of the instructions and data the processor needs; as new information is needed it is brought from main memory to cache, displacing old information. The processor tends to operate with a memory of cache speed but with main memory cost-per-bit. This configuration has analogies with other systems employing memory hierarchies, such as "paging" or "virtual memory" systems. In contrast with these latter, a cache is managed by hardware algorithms, deals with smaller blocks of data (32 bytes, for example, rather than 4096), provides a smaller ratio of memory access times (5:1 rather than 1000:1), and, because of the last factor, holds the processor idle while blocks of data are being transferred from main memory to cache rather than switching to another task. These are important differences, and may suffice to make the cache-based system cost effective in many situations where paging is not.

18.
This paper analyzes the performance bottleneck of redundant arrays of independent disks (RAID) and argues for the necessity of introducing a cache into the disk array. After examining a typical cache-buffer management module, it surveys several approaches to cache optimization and finally proposes a new cache management policy.

19.
Caching is an important technique used throughout modern computer memory hierarchies. By analyzing the characteristics of unified and split caches, the paper observes that if the resource conflict that arises in a unified cache when instruction fetch and data access occur simultaneously can be resolved, the cache's capabilities can be exploited more fully. It therefore proposes, for a VLIW architecture, a scheme that adds an instruction fill buffer (BUF) on top of a unified cache, and describes the scheme from three aspects: instruction-interface support, hardware structure, and compiler support. Examples and experimental data show that it resolves the resource conflict between simultaneous instruction fetch and data access fairly effectively.

20.
Li Jianjiang, Deng Zhaochu, Du Panpan, Lin Jie 《The Journal of Supercomputing》2022,78(4):4779-4798

The Sunway TaihuLight is the first supercomputer built entirely with domestic processors in China. On Sunway TaihuLight, the local data memory (LDM) of each slave core is small, so data must be transferred to and from main memory frequently during computation, and memory-access efficiency is low. Moreover, for many scientific computing programs, handling the storage of irregularly accessed data is the key to optimization. A software cache (SWC) is one effective means of addressing these problems. Based on the characteristics of the Sunway TaihuLight architecture and of irregular accesses, this paper designs and implements a new software cache structure that uses part of the LDM to emulate a cache, employing a new cache address mapping and conflict-resolution scheme to avoid the high data-access and storage overheads of a traditional cache. The SWC also uses register communication between slave cores to share cached data across the LDMs of different slave cores, increasing the effective cache capacity and improving the hit rate. In addition, a double-buffering strategy is adopted to access regular data in batches, hiding the communication overhead between the slave cores and main memory. Tests on the Sunway TaihuLight platform show that the proposed software cache structure effectively reduces program running time, improves the software cache hit rate, and achieves a good optimization effect.
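The core of such a software cache can be sketched as a direct-mapped tag array over LDM lines, with whole lines fetched from main memory on a miss. dma_get() stands in for the real Sunway athread DMA interface, and the mapping below is an illustration under assumed sizes, not the paper's design.

    /* Minimal sketch of a software cache for a small local memory (LDM). */
    #include <stdio.h>
    #include <string.h>

    #define LINE_WORDS 16
    #define NLINES     32                         /* LDM budget: 32 cache lines */

    static double main_mem[1 << 16];               /* stand-in for main memory */
    static double ldm[NLINES][LINE_WORDS];         /* software cache storage */
    static long   tags[NLINES];

    static void dma_get(double *dst, long idx, int n) {  /* stand-in for DMA */
        memcpy(dst, &main_mem[idx], n * sizeof(double));
    }

    static double swc_read(long idx) {
        long line = idx / LINE_WORDS;
        int  slot = line % NLINES;                 /* cache address mapping */
        if (tags[slot] != line) {                  /* miss: fetch the whole line */
            dma_get(ldm[slot], line * LINE_WORDS, LINE_WORDS);
            tags[slot] = line;
        }
        return ldm[slot][idx % LINE_WORDS];        /* hit path: pure LDM access */
    }

    int main(void) {
        for (long i = 0; i < (1 << 16); i++) main_mem[i] = (double)i;
        memset(tags, -1, sizeof(tags));
        double s = 0;
        for (long i = 0; i < 4096; i++) s += swc_read(i);  /* misses once per line */
        printf("sum = %.0f\n", s);
        return 0;
    }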

