1.
The installation of display cards, floppy/hard disk controller cards, and network cards reduces the available memory space, while ever-larger application software demands more of that limited space. After introducing the basics of memory and terminate-and-stay-resident (TSR) programs, the article proposes several methods for resolving this conflict.
2.
A Memory Management Algorithm for the Embedded Operating System μC/OS-II  Cited by: 1 (self-citations: 1, others: 0)
To address the shortcomings of the μC/OS-II memory management mechanism, a new memory management algorithm is proposed: smaller memory is divided into fixed-size blocks organized by a bitmap index, while larger memory is organized as a linked list. Experiments show that the method improves allocation speed and memory utilization, especially in systems whose memory-block sizes vary widely.
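The bitmap half of the scheme can be sketched as follows. This is a minimal illustration, not code from the paper: `pool_alloc`, `SMALL_BLOCK`, and the 64-block limit are our assumptions, and the linked-list path for large requests is only noted in the comments.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: small requests are served from a pool of
 * fixed-size blocks tracked by a bitmap; larger requests would fall
 * back to a linked free list (not shown). */

#define SMALL_BLOCK 32            /* bytes per fixed block           */
#define POOL_BLOCKS 64            /* number of blocks (<= 64 bits)   */

static uint8_t  pool[POOL_BLOCKS][SMALL_BLOCK];
static uint64_t bitmap;           /* bit i set => block i is free    */

static void pool_init(void) { bitmap = ~(uint64_t)0; }

/* Allocate one fixed block: scan the bitmap for the first set bit. */
static void *pool_alloc(void) {
    for (int i = 0; i < POOL_BLOCKS; i++) {
        if (bitmap & ((uint64_t)1 << i)) {
            bitmap &= ~((uint64_t)1 << i);   /* mark block i used */
            return pool[i];
        }
    }
    return NULL;                              /* pool exhausted */
}

static void pool_free(void *p) {
    size_t i = (size_t)((uint8_t (*)[SMALL_BLOCK])p - pool);
    bitmap |= (uint64_t)1 << i;               /* mark block i free */
}
```

A real implementation would scan the bitmap word-at-a-time (e.g. with a count-trailing-zeros instruction) so allocation stays O(1) rather than O(n).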
3.
An Analysis of Data Structures for Main-Memory Databases  Cited by: 11 (self-citations: 0, others: 11)
One way for a database management system to achieve high performance is to keep the database in main memory rather than on disk; this requires new data structures and algorithms that use CPU cycles and memory space more effectively. This paper surveys several physical data organizations suited to main-memory databases and the index structures used by current main-memory database management systems.
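A defining trait of main-memory indexes, as opposed to disk-based ones, is that they can hold direct pointers to records instead of disk addresses. The sketch below is our own illustration of that idea (the `Record`/`index_find` names are not from the paper): a sorted array of record pointers searched with binary search, trading a few CPU cycles for very little index space.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative main-memory index: an array of direct record pointers,
 * sorted by key and binary-searched. No disk pages are involved. */

typedef struct { int key; const char *payload; } Record;

static int cmp_rec(const void *a, const void *b) {
    const Record *ra = *(const Record * const *)a;
    const Record *rb = *(const Record * const *)b;
    return (ra->key > rb->key) - (ra->key < rb->key);
}

/* Sort the pointer array once; lookups are then O(log n). */
static void index_build(const Record **idx, size_t n) {
    qsort(idx, n, sizeof *idx, cmp_rec);
}

static const Record *index_find(const Record **idx, size_t n, int key) {
    size_t lo = 0, hi = n;
    while (lo < hi) {                 /* lower-bound binary search */
        size_t mid = lo + (hi - lo) / 2;
        if (idx[mid]->key < key) lo = mid + 1; else hi = mid;
    }
    return (lo < n && idx[lo]->key == key) ? idx[lo] : NULL;
}
```

Structures such as T-trees refine the same idea for cheap in-place updates, which a plain sorted array lacks.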
4.
5.
6.
7.
This paper discusses the basic requirements that embedded development places on memory management and the key problems involved, and presents a VxWorks memory management scheme: apart from the memory reserved by the VxWorks system itself, memory is divided into three types and managed accordingly: fixed-size buffer pools, a dynamically variable heap, and queues composed of fixed-size buffers of various sizes.
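The first of the three region types, a fixed-size buffer pool, can be sketched with a singly linked free list, which gives the O(1), search-free allocation embedded systems need. All names below are illustrative; real VxWorks code would use the memPartLib partition API instead.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative fixed-size buffer pool: free buffers are threaded onto
 * a singly linked list, so get/put are O(1) and deterministic. */

typedef struct Buf { struct Buf *next; } Buf;

typedef struct { Buf *free_list; } BufPool;

/* Carve `mem` (nbuf buffers of bufsz bytes, bufsz >= sizeof(Buf))
 * into the pool's free list. */
static void pool_init(BufPool *p, void *mem, size_t bufsz, size_t nbuf) {
    char *cur = mem;
    p->free_list = NULL;
    for (size_t i = 0; i < nbuf; i++, cur += bufsz) {
        Buf *b = (Buf *)cur;
        b->next = p->free_list;        /* push onto free list */
        p->free_list = b;
    }
}

static void *pool_get(BufPool *p) {
    Buf *b = p->free_list;
    if (b) p->free_list = b->next;     /* pop: no searching, no splitting */
    return b;
}

static void pool_put(BufPool *p, void *buf) {
    Buf *b = buf;
    b->next = p->free_list;
    p->free_list = b;
}
```

Because every buffer has the same size, the pool can never fragment, which is why such pools are preferred over the general heap for hot allocation paths.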
8.
Memory plays an important role in a computer: poor-quality memory directly degrades performance and can even render the machine unusable. Mainstream memory modules currently sell for roughly 300-800 yuan, and the market is a mixed bag, so it is worth reviewing some points to check when buying memory. Model mislabeling: this usually means passing off ordinary DRAM as EDO-RAM, or 168-pin EDO-RAM as 168-pin SDRAM. Ordinary DRAM and EDO-RAM are both 72-pin and look much alike, differing only in the product number, which makes them hard to tell apart. Two checks help. First, the last two digits of an ordinary DRAM product number are usually "00", while those of EDO-RAM are usually "03" or "04"…
9.
Design and Implementation of a Distributed Disk Cache  Cited by: 1 (self-citations: 0, others: 1)
To improve system I/O performance in loosely coupled environments with high-bandwidth, low-latency interconnects, a distributed system that extends the memory hierarchy is proposed: the distributed disk cache system DRACO. By exploiting the spare memory of idle nodes in the distributed environment, it enlarges the system's overall cache capacity and reduces the frequency of disk accesses, thereby improving overall system performance.
10.
With the rapid development of the Internet and cloud computing, existing dynamic random access memory (DRAM) can no longer meet the performance and energy requirements of some real-time systems. The emergence of new non-volatile memories (NVM) brings new opportunities for computer memory architectures. For a hybrid NVM/DRAM main-memory architecture, this paper proposes an efficient hybrid-memory page management mechanism. Taking the different write characteristics of the two media into account, it places pages with different access patterns in the appropriate memory region, reducing the number of migration operations and thereby improving system performance. The mechanism also uses a two-way linked list to spread writes across the NVM medium more evenly, extending its lifetime. Finally, the mechanism is implemented and evaluated in the Linux kernel; comparison with existing memory management mechanisms confirms its effectiveness.
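The placement idea can be illustrated with a toy policy. This is our own simplification, not the paper's mechanism: the `Page` fields, `should_migrate`, and the threshold are assumptions, but they capture the principle that write-hot pages belong in DRAM (writes wear out and are costly on NVM) while read-mostly pages can live in NVM, and that a page migrates only when its observed behavior contradicts its current home.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative hybrid-memory placement policy (not the paper's):
 * write-hot pages go to DRAM, read-mostly pages to NVM. */

typedef enum { IN_DRAM, IN_NVM } Home;

typedef struct {
    Home     home;            /* current medium        */
    unsigned reads, writes;   /* per-epoch counters    */
} Page;

#define WRITE_HOT_THRESHOLD 4 /* assumed tunable knob  */

/* True if the page should move to the other medium. */
static bool should_migrate(const Page *pg) {
    bool write_hot = pg->writes >= WRITE_HOT_THRESHOLD &&
                     pg->writes > pg->reads;
    if (write_hot)
        return pg->home == IN_NVM;                 /* writes wear NVM: move out */
    return pg->home == IN_DRAM && pg->reads > 0;   /* read-mostly: free up DRAM */
}
```

Checking the page's current home before migrating is what keeps migration counts low: a page already in the right medium never moves.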
11.
The Hadoop cloud storage architecture was designed for efficient storage and processing of large files; when handling small files from mobile clients, such as images, it suffers from slow index retrieval on the NameNode and poor storage-space utilization on the DataNodes. To address this, a small-file archiving scheme, FHAR, is proposed. The scheme jointly considers the real-time access needs of mobile users, NameNode memory usage, and DataNode storage utilization; it combines a two-layer-index archiving technique with a system load prediction algorithm based on FAHP (fuzzy multi-attribute decision theory) to balance load and improve service efficiency, and it optimizes access operations with a data prefetching mechanism. Simulation results show that the scheme effectively improves node storage efficiency and the responsiveness experienced by users.
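The two-layer-index idea can be sketched as follows. This is a hypothetical miniature, not FHAR's actual layout: a small top-level table routes a file name to an archive part, and a per-part table maps it to an (offset, length) pair inside that part, so the NameNode holds one entry per archive rather than one per small file.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical two-level archive index for small files. Real
 * HAR/FHAR layouts differ; names here are illustrative. */

typedef struct { const char *name; long offset, length; } Entry;
typedef struct { Entry entries[8]; int n; } Part;
typedef struct { Part parts[4]; int nparts; } Archive;

/* Level 1: pick the archive part by a cheap hash of the file name. */
static int part_of(const Archive *a, const char *name) {
    unsigned h = 0;
    for (const char *c = name; *c; c++) h = h * 31 + (unsigned char)*c;
    return (int)(h % (unsigned)a->nparts);
}

static void archive_add(Archive *a, const char *name, long off, long len) {
    Part *p = &a->parts[part_of(a, name)];
    p->entries[p->n++] = (Entry){ name, off, len };
}

/* Level 2: scan the (small) per-part index for the exact name. */
static const Entry *archive_find(const Archive *a, const char *name) {
    const Part *p = &a->parts[part_of(a, name)];
    for (int i = 0; i < p->n; i++)
        if (strcmp(p->entries[i].name, name) == 0) return &p->entries[i];
    return NULL;
}
```

The prefetching mentioned in the abstract would sit on top of this lookup, pulling neighboring entries of the same part into cache on a hit.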
12.
Design and Testing of a Ferroelectric Memory Cell  Cited by: 1 (self-citations: 0, others: 1)
Based on a unified ferroelectric device model already used in practical designs, the design of a 2T2C destructive-readout ferroelectric memory cell is discussed in detail. On this basis, a discrete-component cell test circuit was designed and fabricated. Comparative experiments against ordinary capacitors confirmed that a destructive-readout ferroelectric RAM operates on principles and in modes different from those of ordinary RAM, and yielded characteristic waveforms of the FRAM cell under test along with data on the storage properties of the ferroelectric material. This work lays the groundwork for further research on large-scale ferroelectric memories.
13.
14.
15.
Stack Size Minimization for Embedded Real-Time Systems-on-a-Chip  Cited by: 1 (self-citations: 0, others: 0)
The primary goal of real-time kernel software for single- and multiple-processor systems-on-a-chip is to support the design of timely and cost-effective systems. The kernel must provide time guarantees, in order to make the timing behavior of the application predictable; an extremely fast response time, so that computing power is not wasted outside the application cycles; and it must save as much RAM as possible, in order to reduce the overall cost of the chip. Research on real-time software systems has produced algorithms that effectively schedule system resources while guaranteeing application deadlines, and that group tasks into a very small number of non-preemptive sets requiring far less RAM for stacks. Up to now, however, the research focus has been on time guarantees rather than on optimizing RAM usage, and these techniques do not apply to the multiprocessor architectures likely to be widely used in future microcontrollers. This paper presents innovative scheduling and optimization algorithms that guarantee schedulability with extremely small operating-system overhead while minimizing RAM usage. We developed a fast, simple algorithm for sharing resources in multiprocessor systems, together with an innovative procedure for assigning a preemption threshold to tasks; together these allow the use of a single user stack. The experimental section shows the effectiveness of a simulated-annealing-based tool that finds a schedulable system configuration starting from a near-optimal task allocation. Used in conjunction with our preemption-threshold assignment algorithm, the tool further reduces RAM usage in multiprocessor systems.
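Why grouping tasks into non-preemptive sets saves stack RAM can be shown with a small computation. This is our own formulation of the bound, not the paper's algorithm: since tasks in the same non-preemptive group can never preempt each other, at most one of them has a frame on the shared stack at any time, so the worst case is the sum over groups of each group's largest task stack rather than the sum over all tasks.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stack bound for tasks partitioned into
 * mutually non-preemptive groups sharing one stack. */

typedef struct { int group; size_t stack; } Task;

static size_t shared_stack_bound(const Task *t, int n, int ngroups) {
    size_t total = 0;
    for (int g = 0; g < ngroups; g++) {
        size_t worst = 0;                 /* largest stack in group g */
        for (int i = 0; i < n; i++)
            if (t[i].group == g && t[i].stack > worst)
                worst = t[i].stack;
        total += worst;                   /* at most one frame per group */
    }
    return total;
}
```

A preemption-threshold assignment effectively chooses the grouping; the fewer the groups, the smaller this bound, which is the RAM the paper's optimizer minimizes.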
16.
The Asynchronous Transfer Mode (ATM) is considered a key technology for B-ISDN. This paper discusses VLSI trends and how VLSIs can be applied to realize ATM switching-node systems for B-ISDN. Implementing a practical ATM node system will require technologies such as high-throughput ATM switch LSIs with up to 10 Gb/s capacity and SDH termination based on optical-fiber transmission. An ATM traffic-handling mechanism with Quality of Service (QoS) controls, such as ATM-layer performance monitoring, virtual-channel handling, usage parameter control, and VP shaping, requires several hundred thousand logic gates and several megabytes of high-speed static RAM; VLSIs must be introduced if such mechanisms are to be implemented. The ATM node system architecture is based on the design principles of a building-block structure and hierarchical multiplexing. The basic ATM call-handling module, the AHM, is composed mainly of a line-termination block and a self-routing switch block; we analyzed this module in terms of the amount of hardware it requires. Finally, future ATM node systems are discussed on the basis of 0.2-μm VLSI development trends and hardware requirements such as ultrahigh integration of logic gates with memory, multichip modules, and low-power dissipation technology.
17.
As DRAM technology faces scalability limits due to its excessive leakage power at nano-scale, various non-volatile memory technologies have emerged as candidates to replace it in the memory hierarchy. Among these, Phase Change Memory (PCM) is promising for main memory thanks to its near-zero leakage power, higher density, non-volatility, and soft-error immunity. However, its major drawbacks, high write energy and limited write endurance, have prevented its use as a drop-in replacement for DRAM. In this paper, we propose a technique that swaps data between memory lines with the goal of reducing bit flips. The proposed swapping technique finds the best place to write a chunk of data among a limited set of lines so as to minimize the number of bit flips. The swap operation works online, i.e., it requires no data profiling; it needs no major modification of existing solutions, only the addition of the proposed circuitry; and it is complementary to various other architectures aimed at enhancing PCM lifetime. Experimental results on a quad-core CMP system show that the proposed technique prolongs PCM main-memory lifetime by 48%, at the price of 1% and 2% overhead in read and write latencies, respectively.
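The core selection step reduces to a Hamming-distance comparison. The sketch below is our simplification (64-bit lines instead of full cache lines, software instead of the paper's circuitry): before writing, the new data is XORed against each candidate line's current contents and the line with the fewest differing bits wins, since in PCM only differing bits must actually be programmed.

```c
#include <assert.h>
#include <stdint.h>

/* Count bits that would flip when overwriting `old` with `new_val`. */
static int bit_flips(uint64_t old, uint64_t new_val) {
    uint64_t diff = old ^ new_val;   /* 1-bits = cells to reprogram */
    int n = 0;
    while (diff) { diff &= diff - 1; n++; }   /* Kernighan popcount */
    return n;
}

/* Among a limited set of candidate lines, pick the one whose current
 * contents minimize bit flips for the incoming data. */
static int best_line(const uint64_t *lines, int nlines, uint64_t data) {
    int best = 0, best_cost = bit_flips(lines[0], data);
    for (int i = 1; i < nlines; i++) {
        int c = bit_flips(lines[i], data);
        if (c < best_cost) { best_cost = c; best = i; }
    }
    return best;
}
```

Keeping the candidate set small is what bounds the extra read latency at the reported 1%: each candidate must be read before the comparison.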
18.
Quantum-dot Cellular Automata (QCA) is a novel and attractive approach to designing and implementing high-performance, low-power digital circuits at the nano-scale. Since memory is one of the most widely used basic units in digital circuits, a fast and optimized QCA-based memory cell is of particular value. Although several QCA memory-cell structures exist in the literature, QCA's own characteristics can yield a more optimized memory cell than blindly transplanting CMOS logic into QCA. In this paper, two improved structures are proposed for a loop-based Random Access Memory (RAM) cell. The proposed methods exploit inherent capabilities of QCA such as the programmability of the majority gate and the clocking mechanism. The first proposed structure uses fewer cells and wastes less area than the traditional loop-based RAM cell; the second doubles the memory access speed while also using fewer cells. Because irregular placement of QCA cells makes a layout difficult to realize, alternative versions of both structures that exploit the regularity of clock zones are also proposed and compared against each other. QCADesigner was employed to simulate the proposed designs and validate them.
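The "programmability of the majority gate" mentioned above refers to the QCA primitive M(a, b, c) = ab + bc + ca: fixing one input to 0 turns it into an AND gate, fixing it to 1 turns it into an OR gate. The snippet below is a logic-level illustration only; actual QCA cells are clocked nanostructures, not C code.

```c
#include <assert.h>

/* QCA majority gate at the Boolean level: M(a,b,c) = ab + bc + ca.
 * Program it as AND by tying one input to 0, as OR by tying it to 1. */
static int majority(int a, int b, int c) {
    return (a & b) | (b & c) | (a & c);
}
```

Because one majority gate subsumes both AND and OR, QCA designs count majority gates (plus inverters) rather than conventional gates when comparing cell counts.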
19.
We study a multistage hierarchical asynchronous transfer mode (ATM) switch in which each switching element has its own local cell-buffer memory shared among all of its output ports. We propose a novel buffer-management technique called delayed pushout, which combines a pushout mechanism (for sharing memory efficiently among queues within the same switching element) with a backpressure mechanism (for sharing memory across switch stages). The backpressure component has a threshold that restricts the amount of sharing between stages; a synergy emerges when pushout, backpressure, and this threshold are all employed together. Using a computer simulation of the switch under symmetric but bursty traffic, we study delayed pushout as well as several simpler pushout and backpressure schemes under a wide range of loads. At every load level, we find that the delayed pushout scheme has a lower cell-loss rate than its competitors. Finally, we show how delayed pushout can be extended to share buffer space between traffic classes with different space priorities.
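The two ingredients being combined can be sketched in miniature. This is our own toy model, not the paper's delayed-pushout algorithm: a switching element with a shared buffer admits an arriving cell unconditionally while space remains, pushes out a cell from the longest queue when full (pushout), and asserts backpressure toward the previous stage once total occupancy crosses a threshold (the knob that limits sharing between stages).

```c
#include <assert.h>
#include <stdbool.h>

/* Toy switching element: NQUEUES output queues share one buffer. */
#define NQUEUES 4

typedef struct {
    int len[NQUEUES];                  /* per-queue occupancy        */
    int total, capacity, bp_threshold; /* shared-buffer bookkeeping  */
} Element;

/* Backpressure to the upstream stage when occupancy is high. */
static bool backpressure(const Element *e) {
    return e->total >= e->bp_threshold;
}

/* Admit a cell for queue q; if the buffer is full, push out a cell
 * from the longest queue. Returns the victim queue, or -1 if none. */
static int admit(Element *e, int q) {
    int victim = -1;
    if (e->total == e->capacity) {
        for (int i = 0; i < NQUEUES; i++)
            if (victim < 0 || e->len[i] > e->len[victim]) victim = i;
        e->len[victim]--; e->total--;  /* pushout from longest queue */
    }
    e->len[q]++; e->total++;
    return victim;
}
```

The paper's "delayed" variant coordinates these two mechanisms rather than applying them independently; this sketch only shows the separate building blocks.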