Similar Documents
20 similar documents found (search time: 484 ms)
1.
Design and Implementation of a State-Machine-Based SDRAM Controller   Total citations: 8 (self-citations: 0, citations by others: 8)
The basic framework of the modern computer is still founded on the von Neumann architecture, supported by communication between a central control unit and the memory that stores instructions and data. Synchronous dynamic random-access memory (SDRAM) offers larger capacity and lower cost than static RAM, and higher speed than traditional asynchronous DRAM, so it has come into increasingly wide use. The design of the SDRAM controller, whose main task is to simplify host access to SDRAM, has therefore become all the more important. This paper presents a state-machine-based design and implementation of an SDRAM controller; the design passed FPGA verification and fully meets the system's functional and speed requirements.
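The state-machine control flow described above can be sketched in software. Below is a minimal, hypothetical model of an SDRAM command sequencer; the state names, command strings, and row/column interface are illustrative, not the paper's actual design, and real controllers must also honor refresh intervals and timing parameters (tRCD, tRP, CAS latency):

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()      # no row open
    ACTIVE = auto()    # a row has been opened (ACTIVATE issued)
    READ = auto()
    WRITE = auto()

class SdramFsm:
    """Toy model of an SDRAM controller state machine: it hides
    row management (activate/precharge) from the host, which only
    issues row/column accesses."""
    def __init__(self):
        self.state = State.IDLE
        self.open_row = None
        self.commands = []   # command trace, for inspection

    def _issue(self, cmd):
        self.commands.append(cmd)

    def access(self, row, col, write=False):
        # Row miss: close the currently open row first.
        if self.open_row is not None and self.open_row != row:
            self._issue("PRECHARGE")
            self.open_row = None
            self.state = State.IDLE
        # Open the target row if no row is open.
        if self.open_row is None:
            self._issue(f"ACTIVATE row={row}")
            self.open_row = row
            self.state = State.ACTIVE
        # Issue the column command.
        cmd = "WRITE" if write else "READ"
        self._issue(f"{cmd} col={col}")
        self.state = State.WRITE if write else State.READ

fsm = SdramFsm()
fsm.access(row=3, col=7)              # row miss: ACTIVATE + READ
fsm.access(row=3, col=8)              # row hit: READ only
fsm.access(row=5, col=0, write=True)  # PRECHARGE + ACTIVATE + WRITE
print(fsm.commands)
```

The point of the sketch is the state transitions: the controller remembers which row is open and only issues PRECHARGE/ACTIVATE when the host's request does not match it.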

2.
《Micro, IEEE》2002,22(4)
The Rambus standardization skullduggery saga continues. SDRAM technology licensor Rambus sued chipmaker Infineon for patent infringement because Infineon refused to take a license under Rambus' patents. (SDRAMs are synchronous dynamic random-access memory chips. Instead of running asynchronously like ordinary DRAMs, SDRAMs are synchronized to a system clock. By 1999, SDRAM had largely replaced asynchronous DRAM.) Infineon then countersued for common-law fraud based on Rambus' alleged abuse of the standard-setting process. After a trial in which the judge assessed $7 million in damages against Rambus, the company appealed to the Federal Circuit appeals court. After its recent hearing of the opposing arguments, the Federal Circuit will probably take at least six months to hand down an opinion. In June 2002, the Federal Trade Commission weighed in by suing Rambus for engaging in unfair competition, in violation of Section 5 of the FTC Act.

3.
The MPC105 Peripheral Component Interconnect (PCI) bridge/memory controller provides a platform-specification-compliant bridge between PowerPC microprocessors and the PCI bus. With it, designers can create systems using peripherals already designed for a variety of standard PC interfaces. This bridge chip also integrates a secondary cache controller and a high-performance memory controller that supports DRAM or synchronous DRAM, and ROM or flash ROM.

4.
DDR SDRAM memory is already in widespread use. This paper analyzes the structure and key techniques of a DDR SDRAM controller in detail and presents an implementation based on an Altera FPGA. After a close study of the DDR memory controller's operating principles and internal structure, we used the IP core supplied by Altera directly, invoking the MegaCore (Altera's IP core) in the Quartus II 5.0 development environment, and designed and implemented a DDR SDRAM controller tailored to the specific application requirements.

5.
《Computer》2002,35(12):42-46
Over the past 50 years, technological innovations have continually driven down computer storage costs while dramatically increasing performance and value, and the trend shows no signs of slowing. The article surveys various forms of storage, including random-access memory; asynchronous and synchronous DRAM; RDRAM; SDRAM; flash memory; tape; RAID storage; hard disks; floppy drives; and CDs and DVDs.

6.
Diamond, S.L. 《Micro, IEEE》1996,16(6):74-75
In 1987 it was becoming apparent that computer systems would need to communicate at data rates beyond the capabilities of then-current approaches. To take on this challenge, the IEEE Futurebus group formed a new group dedicated to building a fast, coherent interface, scalable from the smallest computer to the largest mainframe. The IEEE officially established the SCI (Scalable Coherent Interface) working group, referenced as IEEE P1596, in 1988. Today the SCI is IEEE Std 1596. It has several derivatives that enhance its operation and meet additional needs such as real-time operations: a differential voltage specification (IEEE 1596.3), a point-to-point memory interface (IEEE 1596.4), and SyncLink (IEEE P1596.7). SyncLink is a new, high-speed memory interface that combines synchronous dynamic RAM (SDRAM), developed in the late 1980s, and the IEEE's RamLink interface, developed in 1990. SyncLink uses the earlier interface's packet protocol as a base but eliminates its point-to-point link to reduce latency. The new interface uses SDRAM's internal pipelining and control for efficient bus utilization. It allows access in the lowest latency possible for a very high bandwidth DRAM. Future plans include data rates of 800 Mbits per pin per second in 1998, to double in the early 2000s. This rate will allow systems to operate at full capability, even when running 3D graphics.

7.
Because of the advantages of flash media for aerospace data storage, flash has gradually become the next-generation satellite storage solution after DRAM and SDRAM. Owing to flash's storage characteristics, however, effective management of stored data is still lacking, and onboard flash file systems face many open research problems. This paper analyzes the storage characteristics of satellites in the onboard environment, derives the data-management requirements of that environment, and finds through simulated experiments that F2FS outperforms the three flash-oriented file systems BTRFS, exFAT, and NILFS on four performance metrics: IOPS (read/write operations per second), metadata performance, throughput, and random addressing. Combined with concrete tests, the paper analyzes the effect of flash-internal parallelism on the file system. It concludes that F2FS is well suited to data management for flash-based onboard storage, proposes several improvements for the onboard environment based on the data-management requirements, and provides a research foundation for the design and implementation of onboard flash file systems.

8.
DDR SDRAM features high integration, high density, high interface bandwidth, and low price, and is now widely used in PCs, servers, and embedded systems. This paper describes a DDR controller design based on the Altera Cyclone II, the Cyclone II interface, and the controller IP core supplied by Altera, and discusses how to strip unnecessary functions from the IP core according to the application's own characteristics and replace the encrypted IP core, thereby implementing one's own DDR controller. This approach safeguarded both the speed and the quality of the project's development.

9.
When massive data requests access a heterogeneous memory system, heterogeneous memory pages migrate frequently back and forth between dynamic random-access memory (DRAM) and non-volatile memory (NVM). However, migration policies designed for conventional memory pages struggle to adapt to rapid dynamic changes in how "hot" or "cold" a page is, which means that pages migrated from DRAM to N...
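A common generic way to track rapidly changing page hotness, sketched here purely for illustration (this is an aging-counter scheme of my own choosing, not the policy of the paper above; thresholds and decay factor are invented):

```python
def decay_hotness(counters, decay=0.5):
    """Periodically scale down per-page access counters so stale
    activity fades and recently hot pages dominate."""
    return {page: count * decay for page, count in counters.items()}

def classify(counters, threshold=4.0):
    """Pages at/above the threshold are 'hot' (DRAM candidates);
    the rest are 'cold' (NVM candidates)."""
    hot = {p for p, c in counters.items() if c >= threshold}
    cold = set(counters) - hot
    return hot, cold

counts = {"p0": 16, "p1": 2, "p2": 9}
counts = decay_hotness(counts)    # p0: 8.0, p1: 1.0, p2: 4.5
hot, cold = classify(counts)
print(sorted(hot), sorted(cold))  # ['p0', 'p2'] ['p1']
```

The decay step is what lets the classification react to changing access patterns: a page that stops being touched loses half its score on every decay interval and soon drops below the hot threshold.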

10.
The traditional dynamic random-access memory (DRAM) storage medium can be integrated on chips via modern emerging 3D-stacking technology to architect a DRAM shared cache in multicore systems. Compared with static random-access memory (SRAM), DRAM is larger but slower. In the existing research, a lot of work has been devoted to improving the workload performance using SRAM and stacked DRAM together in shared cache systems, ranging from SRAM structure improvement to optimizing cache tags and data access. However, little attention has been paid to designing a shared cache scheduling scheme for multiprogrammed workloads with different memory footprints in multicore systems. Motivated by this, we propose a hybrid shared cache scheduling scheme that allows a multicore system to utilize SRAM and 3D-stacked DRAM efficiently, thus achieving better workload performance. This scheduling scheme employs (1) a cache monitor, which is used to collect cache statistics; (2) a cache evaluator, which is used to evaluate the cache information during the process of programs being executed; and (3) a cache switcher, which is used to self-adaptively choose SRAM or DRAM shared cache modules. A cache data migration policy is naturally developed to guarantee that the scheduling scheme works correctly. Extensive experiments are conducted to evaluate the workload performance of our proposed scheme. The experimental results showed that our method can improve the multiprogrammed workload performance by up to 25% compared with state-of-the-art methods (including conventional and DRAM cache systems).
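The monitor/evaluator/switcher pipeline above can be condensed into a toy decision rule. Everything here is an assumption for illustration (the SRAM size, the miss-rate threshold, and the rule itself are invented stand-ins for the paper's evaluator):

```python
def choose_cache(footprint_kb, sram_miss_rate,
                 sram_size_kb=512, miss_threshold=0.3):
    """Pick the SRAM or DRAM shared-cache module for a workload.

    Hypothetical rule: workloads whose footprint fits in SRAM stay
    there for its lower latency; larger footprints are switched to
    the bigger (but slower) 3D-stacked DRAM cache only once the
    SRAM miss rate is high enough to justify the slower hits.
    """
    if footprint_kb <= sram_size_kb:
        return "SRAM"
    return "DRAM" if sram_miss_rate > miss_threshold else "SRAM"

# Monitor supplies (footprint, miss rate); switcher applies the rule.
print(choose_cache(128, 0.6),    # fits in SRAM
      choose_cache(4096, 0.6),   # big footprint, SRAM thrashing
      choose_cache(4096, 0.1))   # big footprint, but SRAM still hits
```

In the real scheme the statistics come from hardware-ish counters collected while programs run, and a data migration policy moves cached lines when the switcher changes its choice.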

11.
Phase-change memory (PCM) has become a hot topic in main-memory research thanks to its non-volatility, high read speed, and low static power. However, usable PCM devices are currently scarce, so PCM-based algorithm research cannot be validated effectively. This paper therefore proposes simulating and validating PCM algorithms with a main-memory simulator. It first reviews the characteristics of existing main-memory simulators and notes that they cannot fully satisfy the practical needs of current main-memory research; on that basis, it designs and builds a hybrid main-memory simulator based on DRAM and PCM. Experimental comparison with existing simulators shows that the proposed hybrid simulator can effectively model a hybrid DRAM/PCM storage architecture, supports simulation of different forms of hybrid main-memory systems, and is highly configurable. Finally, a usage example illustrates the ease of use of the hybrid simulator's programming interface.

12.
This paper describes SDRAM and a general-purpose FPGA-based SDRAM controller as applied in a PDP (plasma display panel) video storage system. The functions performed by SDRAM's control commands and their control timing are analyzed in detail. The proposed general-purpose controller is not tied to any particular storage operation and is suitable for arbitrarily complex systems: users need only supply simple top-level control commands of their own to operate the SDRAM correctly. The design of the general-purpose SDRAM controller itself is also presented. In the PDP video system, two banks of SDRAM are operated in a ping-pong arrangement driven by a suitable state machine, so that processed data streams without interruption to the SDRAM and the PDP panel.
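The ping-pong operation mentioned above is a standard double-buffering pattern; a minimal software model (bank layout and method names are illustrative, not the paper's hardware design):

```python
class PingPongBuffer:
    """Two SDRAM banks alternate roles each frame: while the video
    pipeline writes processed data into one bank, the display reads
    the previous frame from the other, so output never stalls."""
    def __init__(self):
        self.banks = [[], []]
        self.write_idx = 0   # bank currently being filled

    def write_frame(self, frame):
        self.banks[self.write_idx] = list(frame)

    def read_frame(self):
        # The display always reads the bank NOT being written.
        return self.banks[1 - self.write_idx]

    def swap(self):
        # At a frame boundary the two banks trade roles.
        self.write_idx = 1 - self.write_idx

buf = PingPongBuffer()
buf.write_frame([1, 2, 3])   # frame 0 goes into bank 0
buf.swap()
buf.write_frame([4, 5, 6])   # frame 1 goes into bank 1
print(buf.read_frame())      # display reads bank 0: [1, 2, 3]
```

In hardware, the "swap" corresponds to the state machine redirecting read and write command streams between the two SDRAM chip selects at each frame boundary.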

13.
Modern processors such as Tilera's Tile64, Intel's Nehalem, and AMD's Opteron are migrating memory controllers (MCs) on-chip, while maintaining a large, flat memory address space. This trend to utilize multiple MCs will likely continue, and a core or socket will consequently need to route memory requests to the appropriate MC via an inter- or intra-socket interconnect fabric similar to AMD's HyperTransport or Intel's QuickPath Interconnect. Such systems are therefore subject to non-uniform memory access (NUMA) latencies because of the time spent traveling to remote MCs. Each MC will act as the gateway to a particular region of the physical memory. Data placement will therefore become increasingly critical in minimizing memory access latencies. Increased competition for memory resources will also increase the memory access latency variation in future systems. Proper allocation of workload data to the appropriate MC will be important in decreasing the variation and average latency when servicing memory requests. The allocation strategy will need to be aware of queuing delays, on-chip latencies, and row-buffer hit rates for each MC. In this paper, we propose dynamic mechanisms that take these factors into account when placing data in appropriate slices of physical memory. We introduce adaptive first-touch page placement and dynamic page-migration mechanisms to reduce DRAM access delays for multi-MC systems. We also introduce policies that can handle data placement in memory systems that have regions with heterogeneous properties. The proposed policies yield average performance improvements of 6.5% for adaptive first-touch page placement and 8.9% for a dynamic page-migration policy for a system with homogeneous DRAM DIMMs. We also show improvements in systems that contain DIMMs with different performance characteristics.
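The cost factors the abstract lists (queuing delay, on-chip latency, row-buffer hit rate) suggest a simple scoring rule for first-touch placement. The sketch below is a guess at the shape of such a rule, not the paper's actual mechanism; the weight on row-buffer hit rate is invented:

```python
def pick_controller(mcs):
    """Place a first-touched page at the memory controller with the
    lowest estimated access cost. Cost = queue delay + interconnect
    latency, discounted by the MC's row-buffer hit rate (hits skip
    the expensive row activation). The 10.0 weight is illustrative.
    """
    def cost(mc):
        return mc["queue_delay"] + mc["link_latency"] \
               - 10.0 * mc["row_hit_rate"]
    return min(mcs, key=cost)["id"]

mcs = [
    {"id": 0, "queue_delay": 30, "link_latency": 5,  "row_hit_rate": 0.2},
    {"id": 1, "queue_delay": 10, "link_latency": 20, "row_hit_rate": 0.5},
]
# MC 0 is physically closer (lower link latency) but its queue is
# long; MC 1 wins on total estimated cost.
print(pick_controller(mcs))
```

A dynamic page-migration policy would re-run a rule like this periodically and move pages whose current MC has drifted away from the minimum.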

14.
Product Profile     
《Computer》1975,8(9):75-75
Floppy Disk Drive: FD360 micro-peripheral floppy disk drive operates under directions from Intel or National Semiconductor microprocessor system. Hardware interfaces and FDOS (Floppy Disk Operating Systems) available for Intellec-8, Intellec-8/Mod-80, IMP-16P, 16L, 8P. Features include format compatibility with IBM 3741, 3742, 3540 systems, built-in hardware track seek and seek verification, automatic head load/unload, operation with programmed I/O or DMA interfaces, sector buffering to enable asynchronous programmed I/O. Eight input, 16 output lines provide interfacing. Single drive configuration, $2350 (unit); 2 drives, $3000. Special interfaces available. – iCOM, Canoga Park, CA.

15.
Book Review     
《Computer》1975,8(10):93-93

16.
A DRAM (dynamic RAM) with an on-chip cache, called the cache DRAM, has been proposed and fabricated. It is a hierarchical RAM containing a 1-Mb DRAM for the main memory and an 8-Kb SRAM (static RAM) for cache memory. It uses a 1.2-μm CMOS technology. Suitable for no-wait-state memory access in low-end workstations and personal computers, the chip also serves high-end systems as a secondary cache scheme. It is shown how the cache DRAM bridges the gap in speed between high-performance microprocessor units and existing DRAMs. The cache DRAM concept is explained, and its architecture is presented. The error checking and correction scheme used to improve the cache DRAM's reliability is described. Performance results for an experimental device are reported.

17.
With the rapid development of high-speed computers in recent years, demands on memory bandwidth and performance keep growing. DDR2 SDRAM, the second generation of DDR memory, offers high speed, low power, high density, and high stability, and over the next year or two it will gradually replace DDR SDRAM as the mainstream main memory. Although DDR2's standing keeps rising, DDR is still the prevailing high-speed memory today. Based on an analysis and comparison of these two memory types, this paper proposes and implements a controller, built on the WISHBONE bus, that is compatible with both DDR and DDR2 memory.

18.
Non-volatile memory (NVM) provides a scalable and power-efficient solution to replace dynamic random access memory (DRAM) as main memory. However, because of the relatively high latency and low bandwidth of NVM, NVM is often paired with DRAM to build a heterogeneous memory system (HMS). As a result, data objects of the application must be carefully placed in NVM and DRAM for the best performance. In this paper, we introduce a lightweight runtime solution that automatically and transparently manages data placement on HMS without requiring hardware modifications or disruptive changes to applications. Leveraging online profiling and performance models, the runtime solution characterizes memory access patterns associated with data objects and minimizes unnecessary data movement. Our runtime solution effectively bridges the performance gap between NVM and DRAM. We demonstrate that using NVM to replace the majority of DRAM can be a feasible solution for future HPC systems with the assistance of software-based data management.

19.
Memory diagnostics are important to improving the resilience of DRAM main memory. As bit cell size reaches physical limits, DRAM memory will be more likely to suffer both transient and permanent errors. Memory diagnostics that operate online can be a component of a comprehensive strategy to allay errors. This paper presents a novel approach, Asteroid, to integrate online memory diagnostics during workload execution. The approach supports diagnostics that adapt at runtime to workload behavior and resource availability to maximize test quality while reducing performance overhead. We describe Asteroid's design and how it can be efficiently integrated with a hierarchical memory allocator in modern operating systems. We also present how the framework enables control policies to dynamically configure a diagnostic. Using an adaptive policy, in a 16-core server, Asteroid has modest overhead of 1-4% for workloads with low to high memory demand. For these workloads, Asteroid's adaptive policy has good error coverage and can thoroughly test memory.
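Online memory diagnostics of the kind Asteroid schedules typically run march-style tests over regions the allocator hands them. The following is a simplified, generic march test for illustration (it is not Asteroid's actual diagnostic; the fault model and test sequence are textbook examples):

```python
def march_test(memory):
    """Simplified march test: write 0s ascending, then read-0/write-1
    ascending, then read-1 descending. Returns indices of cells that
    misbehave (e.g., stuck-at faults)."""
    n = len(memory)
    faults = set()
    for i in range(n):                 # element M0: up, write 0
        memory[i] = 0
    for i in range(n):                 # element M1: up, read 0, write 1
        if memory[i] != 0:
            faults.add(i)
        memory[i] = 1
    for i in reversed(range(n)):       # element M2: down, read 1
        if memory[i] != 1:
            faults.add(i)
    return sorted(faults)

class StuckAtZero(list):
    """List modeling a memory region with one stuck-at-0 cell."""
    def __init__(self, size, bad):
        super().__init__([0] * size)
        self.bad = bad
    def __setitem__(self, i, v):
        super().__setitem__(i, 0 if i == self.bad else v)

print(march_test([0] * 8))               # healthy region: []
print(march_test(StuckAtZero(8, bad=3))) # read-1 pass catches cell 3: [3]
```

An online framework's job, as the abstract describes, is deciding when and over which freed pages to run such elements so that test coverage accumulates without noticeably slowing the workload.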

20.
Hybrid memory systems composed of dynamic random access memory (DRAM) and non-volatile memory (NVM) often exploit page migration technologies to take full advantage of the different memory media. Most previous proposals migrate data at a granularity of 4 KB pages, and thus waste memory bandwidth and DRAM resources. In this paper, we propose Mocha, a non-hierarchical architecture that organizes DRAM and NVM in a flat address space physically, but manages them in a cache/memory hierarchy. Since the commercial NVM device, the Intel Optane DC Persistent Memory Module (DCPMM), actually accesses the physical media at a granularity of 256 bytes (an Optane block), we manage the DRAM cache at the 256-byte size to adapt to this feature of Optane. This design not only enables fine-grained data migration and management for the DRAM cache, but also avoids write amplification for Intel Optane DCPMM. We also create an Indirect Address Cache (IAC) in the Hybrid Memory Controller (HMC) and propose a reverse address mapping table in the DRAM to speed up address translation and cache replacement. Moreover, we exploit a utility-based caching mechanism to filter cold blocks in the NVM and further improve the efficiency of the DRAM cache. We implement Mocha in an architectural simulator. Experimental results show that Mocha can improve application performance by 8.2% on average (up to 24.6%), and reduce energy consumption by 6.9% and data migration traffic by 25.9% on average, compared with a typical hybrid memory architecture, HSCC.
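Managing the DRAM cache at 256-byte blocks rather than 4 KB pages can be sketched as follows. This toy model only shows the granularity idea; the lookup dictionary standing in for the IAC, the FIFO-ish eviction, and the capacity are all illustrative, not Mocha's actual structures:

```python
BLOCK = 256  # bytes, matching Optane's internal access granularity

def block_of(addr):
    """Map a physical address to its 256-byte block number."""
    return addr // BLOCK

class DramBlockCache:
    """DRAM cache managed at 256-byte granularity: a miss fetches
    one Optane-sized block from NVM, not a whole 4 KB page, saving
    bandwidth and DRAM space for sparse access patterns."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}   # block number -> data (stand-in for the IAC)

    def access(self, addr, nvm):
        b = block_of(addr)
        if b in self.blocks:
            return self.blocks[b], True         # DRAM hit
        data = nvm.get(b, b"\x00" * BLOCK)      # fetch 256 B from NVM
        if len(self.blocks) >= self.capacity:   # naive eviction
            self.blocks.pop(next(iter(self.blocks)))
        self.blocks[b] = data
        return data, False                      # DRAM miss

nvm = {block_of(0x1234): b"A" * BLOCK}
cache = DramBlockCache(capacity_blocks=2)
_, hit1 = cache.access(0x1234, nvm)
_, hit2 = cache.access(0x1234, nvm)
print(hit1, hit2)   # first access misses, second hits: False True
```

A page-granularity cache would have moved 4096 bytes on the first miss; the block-granularity design moves 256, which is also why it sidesteps DCPMM write amplification.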


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号