Similar Literature
A total of 20 similar documents were found (search time: 115 ms).
1.
An 800-MHz embedded DRAM macro employs a memory cell utilizing a device from the 90-nm high-performance technology menu: a 2.2-nm gate-oxide, 1.5-V I/O device. A concurrent refresh mode is designed to improve memory utilization to over 99% for a 64-μs data retention time. A concurrent refresh scheduler utilizes up-count and down-count registers to identify at least one array to be refreshed at every clock cycle, emulating a classical distributed refresh mode. A command multiplier employs low-frequency phased clock signals to generate the clock, commands, and addresses at rates up to 4× the tester frequency. The macro integrates masked redundancy allocation logic during at-speed multibank test. Hardware results show a 312-MHz random access frequency and an 800-MHz multibank frequency at 1.2 V.
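The abstract does not detail the scheduler's internals; purely as a rough software model of the idea, the sketch below refreshes at least one idle array per clock cycle, using an up-counter for the array that is due and a down-counter as a fallback search pointer. The bank count, register behavior, and refresh_array() stub are assumptions, not details from the paper.

    /* Minimal sketch of a concurrent-refresh scheduler (illustrative only). */
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_ARRAYS 16u

    static unsigned up_count;                      /* next array due for refresh (counts up) */
    static unsigned down_count = NUM_ARRAYS - 1u;  /* fallback pointer (counts down)         */

    static void refresh_array(unsigned id)         /* stands in for the array refresh        */
    {
        printf("refreshing array %u\n", id);
    }

    /* Called once per clock cycle; bit i of busy_mask is set when array i
     * is being accessed by the logic and therefore cannot be refreshed now. */
    static void concurrent_refresh_tick(uint32_t busy_mask)
    {
        if (!(busy_mask & (1u << up_count))) {
            refresh_array(up_count);               /* the due array is idle */
            up_count = (up_count + 1u) % NUM_ARRAYS;
            return;
        }
        /* Due array is busy: scan downward for any idle array so that a
         * refresh still happens this cycle, emulating distributed refresh. */
        for (unsigned i = 0; i < NUM_ARRAYS; i++) {
            unsigned cand = (down_count + NUM_ARRAYS - i) % NUM_ARRAYS;
            if (!(busy_mask & (1u << cand))) {
                refresh_array(cand);
                down_count = (cand + NUM_ARRAYS - 1u) % NUM_ARRAYS;
                return;
            }
        }
        /* All arrays busy this cycle: skip, bounded by the 64-us retention budget. */
    }

    int main(void)
    {
        concurrent_refresh_tick(0x0001u);   /* array 0 busy -> refresh another array */
        concurrent_refresh_tick(0x0000u);   /* nothing busy -> refresh the due array */
        return 0;
    }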

2.
A modular architecture for a DRAM-integrated multimedia chip with a data transfer rate of 6 to 12 Gbyte/s is proposed. The architecture offers design flexibility in terms of both DRAM capacity and the logic-memory interface for use in a wide variety of applications. A DRAM macro built from cascadable DRAM bank modules, each having a 256-kb memory capacity and 128-b I/Os, provides flexibility and reconfigurability of DRAM capacity and a high data transfer rate with an area of 6.4 mm²/Mb. A data transfer circuit (called the “reconfigurable data I/O attachment”), which is attached to the I/O lines of the DRAM macro, provides a flexible logic-memory interface by changing the data-transfer routes between the DRAM macro and the logic circuits in real time. A 6.4-Gbyte/s test chip (called the “media chip”) for three-dimensional computer graphics was fabricated to test the proposed design methodology. It integrates an 8-Mb DRAM and four pixel processors on an 8.35×14.6-mm chip using a 0.4-μm CMOS design rule.

3.
This paper describes a 4-Mb embedded DRAM macro using novel fast random cycle architecture with sense-synchronized read/write (SSR/SSW). The test chip has been fabricated with a 0.15-μm logic-based embedded DRAM process and the 1.5-V 143-MHz no-wait row random access operation has been confirmed. Data retention power is suppressed to 92 μW owing to the hierarchical power supply and SSR. The macro size is 4.59 mm². The cell occupation ratio of the macro is 46%, which is the same as that of a conventional embedded DRAM macro. The macro size and the data retention power are 30% and 4.6%, respectively, of a 4-Mb embedded SRAM macro fabricated by an identical process.

4.
An 8-Mb embedded DRAM has been developed. The salient feature of this embedded DRAM is page-fault tolerance: accesses across different pages can be performed in a minimum column cycle. This feature is achieved by placing a data latch and a transfer gate between the bit-line sense amplifier and the column select gate. The DRAM can be reconfigured as separate 2-Mb units when it is embedded as a macro cell of an ASIC library.

5.
DRAM macros in the 4-Mb (0.8-μm) and 16-Mb (0.5-μm) DRAM process technology generations have been developed for CMOS ASIC applications. The macros use the same area-efficient one-transistor trench cells as the 4-Mb (SPT cell) and 16-Mb (MINT cell) DRAM products. It is shown that trench cells with capacitor plates formed by the grounded substrate are ideal structures for embedded DRAMs. The trench cells, built entirely under the silicon surface, allow cost-effective merged DRAM and CMOS logic process technologies. In the 0.8-μm rule, the DRAM macro has a 32-K×9-b configuration in a silicon area of 1.7×5.0 mm². It achieves a 27-ns access time and a 50-ns cycle time. The other DRAM macro, in the 0.5-μm technology, is organized as 64 K×18 b. It has a macro area of 2.1×4.9 mm and demonstrates a 23-ns access time and a 40-ns cycle time. Small densities and multiple-bit data configurations provide flexibility for ASIC designs and a wide variety of application capabilities. Multiple uses of the DRAM macros bring significant performance leverage to ASIC chips because of the wide data bus and the fast access and cycle times. A data rate of more than 1.3 Gb/s is possible with a single chip. Some examples of actual ASIC chips with embedded DRAM macros are shown.

6.
This paper proposes a virtual-socket architecture to reduce the design turn-around time (TAT) of embedded DRAM. The required memory density and functions of an embedded DRAM are system dependent. In the conventional design flow, the DRAM control circuitry together with the DRAM memory array is handled as a hardware macro, which increases design TAT. In contrast, the proposed architecture provides the DRAM control circuitry as a software macro to take advantage of automated tools based on synchronous circuit design. Combined with array-generator technology, this architecture can achieve high quality and a quick turn-around time (QTAT) for flexible embedded DRAM that is almost the same as that of a CMOS ASIC. We applied this virtual-socket architecture to the development of a 61-Mb synchronous DRAM core using a 0.18-μm design rule and confirmed high-speed operation: 166 MHz at a CAS latency of two and 180 MHz at a CAS latency of three. The experimental results show that the proposed architecture can be applied to the development of high-performance embedded DRAM with design QTAT.

7.
We implemented a 72-Mb Direct Rambus DRAM with a new memory architecture suitable for multibank operation. There are two novel schemes: a flexible mapping redundancy (FMR) technique and an additional refresh scheme. This paper shows that multibanking reduces redundancy area efficiency, but with the FMR technique this 16-bank DRAM realizes the same area efficiency as a single-bank DRAM. In other words, FMR reduces chip area by 13%. This paper also describes how the additional refresh scheme reduces the data retention power to one quarter. Its area efficiency is about four times better than that of the conventional redundancy approach.

8.
A novel fast random cycle embedded RAM macro with a dual-port interleaved DRAM architecture (D2RAM) has been developed. The macro exploits three key circuit techniques: dual-port interleaved DRAM architecture, two-stage pipelined circuit operation, and write-before-sensing. A random cycle time of 8 ns under worst-case conditions has been confirmed with a 0.25-μm embedded DRAM test chip, which is six times faster than a conventional DRAM.

9.
An embedded DRAM macro with a self-adjustable timing control (STC) scheme, a negative edge transmission (NET) scheme, and a power-down data retention (PDDR) mode is developed. A 13.98-mm² 16-Mb embedded DRAM macro is fabricated in a 0.13-μm logic-based embedded DRAM process. Co-salicide word lines and MIM capacitors are used for high-speed array operation. The delay timing variation of 36% for an RC delay can be reduced to 3.8% by using the STC scheme. The NET scheme transfers array control signals to local array blocks with high accuracy. Thereby, the test chip achieves 1.2-V 312-MHz random cycle operation even in the low-power process. A data retention power of 73 μW is realized by using the PDDR mode, which is 5% of that of conventional schemes.

10.
The authors describe a high-speed DRAM (HSDRAM), designed primarily for high performance while retaining the density advantage of the one-transistor DRAM cell. The 128-kb×4, 78-mm² chip shows a random access time of 20 ns and a column access time of 7.5 ns, measured at 5.0 V, 25°C, and a 50-pF load. A 256-b×4 high-speed page mode is provided which has a 12-ns cycle into 60 pF, resulting in a data rate of 330 Mb/s. Additional measurements on HSDRAM further demonstrate that DRAM operation in a high-speed regime is not precluded by noise, power, wiring delay, and soft error rate. The device is implemented in a 1.0-μm n-well CMOS process.

11.
Integrated layer processing (ILP) is an implementation concept that “permits the implementor the option of performing all the (data) manipulation steps in one or two integrated processing loops”. To estimate the achievable benefits of ILP, a file transfer application with an encryption function on top of a user-level TCP has been implemented, and the performance of the application in terms of throughput and packet processing times has been measured. The results show that it is possible to obtain performance benefits by integrating marshalling, encryption, and TCP checksum calculation. However, the experiments yielded a throughput gain of only 10-20%, in contrast to the 50% gain achieved in simple loop experiments. Simulations of memory accesses and cache hit rate show that the main benefit of ILP is reduced memory access rather than an improved cache hit rate. ILP reduced the number of memory accesses by up to 30% in the experiment, but the relative amount of cache misses could not be reduced compared to a carefully designed non-ILP implementation. The results also show that data manipulation characteristics may significantly influence the cache behavior and the achievable performance gain of ILP. Considering these results, ILP can only be recommended in cases where the ILP loop consists of several, but very simple, data manipulations without complex calculations over the data.
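To make the idea concrete, here is a toy C sketch (not the implementation measured in the paper) that fuses a stand-in XOR "cipher" and an Internet-style checksum into a single integrated loop, so each data word is loaded from memory only once; an even buffer length is assumed.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint16_t fold(uint32_t sum)          /* fold carries, Internet-checksum style */
    {
        while (sum >> 16)
            sum = (sum & 0xFFFFu) + (sum >> 16);
        return (uint16_t)~sum;
    }

    /* Conventional layering: one pass over the data per manipulation. */
    static uint16_t layered(uint8_t *buf, size_t len, uint8_t key)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)        /* pass 1: toy "encryption" */
            buf[i] ^= key;
        for (size_t i = 0; i + 1 < len; i += 2) /* pass 2: checksum         */
            sum += ((uint32_t)buf[i] << 8) | buf[i + 1];
        return fold(sum);
    }

    /* ILP: both manipulations fused into one loop over the data. */
    static uint16_t integrated(uint8_t *buf, size_t len, uint8_t key)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i + 1 < len; i += 2) {
            buf[i]     ^= key;
            buf[i + 1] ^= key;
            sum += ((uint32_t)buf[i] << 8) | buf[i + 1];
        }
        return fold(sum);
    }

    int main(void)
    {
        uint8_t a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        uint8_t b[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        /* Both versions yield the same checksum over identically "encrypted" data. */
        printf("%04x %04x\n", (unsigned)layered(a, sizeof a, 0x5A),
                              (unsigned)integrated(b, sizeof b, 0x5A));
        return 0;
    }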

12.
This paper describes a 5-GByte/s data-transfer scheme suitable for synchronous DRAM memory. To achieve a higher data-transfer frequency, the properties were improved based on a frequency analysis of the memory system. A bit-to-bit skew compensation technique that eliminates incongruent skew between the signals is then described, together with a new multioutput controlled delay circuit that accomplishes bit-to-bit skew compensation by controlling the transmission timing of every data bit. The simulated maximum data-transfer rate of the proposed memory system is 5.1/5.8 GByte/s (321/365 MHz, ×64 bit, double data rate) for data write/read operations, respectively.
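The paper's delay circuit is analog hardware; purely as an illustration of what per-bit skew compensation has to compute, the sketch below converts measured per-bit arrival offsets into delay-tap settings so that every bit launches such that it arrives aligned with the slowest bit. The bus width matches the ×64-bit figure above, but the tap pitch and function names are assumptions.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BUS_WIDTH  64            /* x64-bit data bus (per the abstract)   */
    #define TAP_PS     25            /* assumed delay-line tap pitch in ps    */

    /* Given measured per-bit arrival times at the receiver (in ps), compute
     * how many delay taps to insert on each transmitted bit so that every
     * bit arrives together with the latest one. */
    static void compute_skew_taps(const int arrival_ps[BUS_WIDTH],
                                  uint8_t taps[BUS_WIDTH])
    {
        int latest = arrival_ps[0];
        for (size_t i = 1; i < BUS_WIDTH; i++)
            if (arrival_ps[i] > latest)
                latest = arrival_ps[i];

        for (size_t i = 0; i < BUS_WIDTH; i++) {
            int lead = latest - arrival_ps[i];                  /* how early bit i is */
            taps[i] = (uint8_t)((lead + TAP_PS / 2) / TAP_PS);  /* round to a tap     */
        }
    }

    int main(void)
    {
        int arrival[BUS_WIDTH] = {0};
        uint8_t taps[BUS_WIDTH];
        arrival[3] = 180;            /* example: bit 3 arrives 180 ps late */
        compute_skew_taps(arrival, taps);
        printf("bit 0 gets %u taps, bit 3 gets %u taps\n",
               (unsigned)taps[0], (unsigned)taps[3]);
        return 0;
    }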

13.
This paper describes a 32-Mb embedded DRAM macro fabricated using a 0.13-μm triple-well 4-level-Cu embedded DRAM technology, which is suitable for portable equipment for MPEG applications. The macro can operate with 230-MHz random column access even under a 1.0-V power supply. The peak power consumption is suppressed to 198 mW in burst operation. A power-down standby mode, which suppresses the leakage current of the peripheral circuitry, is also provided for portable equipment. Through the collaboration of the array circuit design and the fine Cu metallization technology, a macro size of 18.9 mm² and a cell efficiency of 51.3% are realized even with dual interfaces and triple test functions implemented.

14.
This paper describes an embedded DRAM macro based on a standard CMOS process with a dual-port interleaved DRAM architecture (D2RAM), which is suitable for leading-edge CMOS LSIs with both high-speed and large-scale memories on a chip. The macro exploits three key technologies: a fully sense-signal-loss-compensating technology based on detailed breakdowns of all noise elements, the novel striped trench capacitor (STC) cell, and a write-before-sensing (WBS) circuit with a decoded write bus. A 400-MHz random cycle access has been verified for a D2RAM fabricated in a 0.15-μm standard CMOS process.

15.
A 256K DRAM designed for a variety of organizations and operation modes is described. The chip may be organized as 64K×4, 128K×2, or 256K×1. Four data I/O buffers are selectable by gate signals. Besides the standard RAM mode, it may be operated in a page mode, in a parallel or serial buffer mode, and in a combination of page and serial buffer modes. With these options, the design covers a wide range of applications. The RAS/CAS access times are 80/55 ns. In the combined page and serial buffer mode, a data rate of up to 50 MHz is possible. The chip is built in a metal-gate n-channel technology with a 2-μm minimum line width and two metal interconnection planes.

16.
A 1-Mb DRAM with a single 5.0-V supply, using a half-Vcc-biased memory cell with a reduced electric field of 2 MV/cm and a shared sensing scheme for a reasonable cell signal, is described. A testability concept which allows a 1/4-reduced test time, page/nibble functions including a continuous nibble mode, and an effective redundancy circuit are also described. A typical access time of 90 ns has been obtained using a titanium polycide word-line technology.

17.
This paper describes a 256-Mb DRAM chip architecture which provides up to a ×32-wide organization. In order to minimize the die size, three new techniques were introduced: an exchangeable hierarchical data-line structure, an irregular sense-amplifier layout, and a split address bus with a local redrive scheme for the both-ends DQ. A chip has been developed based on this architecture with 0.25-μm CMOS technology. The chip measures 13.25 mm×21.55 mm, which is the smallest 256-Mb DRAM ever reported. A row address strobe (RAS) access time of 26 ns was obtained under a 2.8-V power supply at 85°C. In addition, a 100-MHz ×32 page-mode operation, i.e., a 400-Mbyte/s data rate, in the standard extended data output (EDO) cycle has been successfully demonstrated.

18.
A 16-Mb embedded DRAM macro in a fully CMOS-logic-compatible 90-nm process with a low-noise core architecture and a high-accuracy post-fabrication tuning scheme has been developed. Based on the proposed techniques, a 61% improvement in sensing accuracy is realized. Even with the smallest cell capacitance of 5 fF/cell, a 322-MHz random-cycle access and a 32-ms data retention time are achieved, the latter reducing the data retention power to 60 μW.

19.
For memory-based big data storage, using hybrid memories consisting of both dynamic random-access memory (DRAM) and non-volatile random-access memories (NVRAMs) is a promising approach. DRAM offers low access time but consumes much energy, whereas NVRAMs have higher access times but need no energy to retain data. In this paper, we propose a new data migration method that dynamically moves data pages into the most appropriate memories to exploit their strengths and alleviate their weaknesses. We predict the access frequency of each data page and then comprehensively measure the gains and costs of each placement choice based on these predicted values. Next, we compute the potential benefits of all choices for each candidate page to make page migration decisions. Extensive experiments show that our method improves the access response time over existing ones by as much as a factor of four, with similar rates of energy consumption.
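The abstract leaves the cost model abstract; the C sketch below shows one plausible shape of such a benefit-versus-cost placement decision, with made-up latency, energy, and migration-cost numbers (none of them from the paper).

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative per-access parameters (invented for the sketch). */
    #define DRAM_LATENCY_NS     50.0
    #define NVRAM_LATENCY_NS   150.0
    #define DRAM_ENERGY_NJ       1.0   /* includes a share of refresh energy */
    #define NVRAM_ENERGY_NJ      0.3   /* no energy needed to retain data    */
    #define MIGRATE_COST_NS   2000.0   /* one-time page-copy overhead        */

    struct page {
        double predicted_accesses;     /* predicted access frequency for the next interval */
        bool   in_dram;
    };

    /* Weigh the predicted cost of keeping the page where it is against the
     * predicted cost of the alternative placement plus the migration cost. */
    static bool should_migrate(const struct page *p,
                               double time_weight, double energy_weight)
    {
        double dram_cost  = p->predicted_accesses *
            (time_weight * DRAM_LATENCY_NS  + energy_weight * DRAM_ENERGY_NJ);
        double nvram_cost = p->predicted_accesses *
            (time_weight * NVRAM_LATENCY_NS + energy_weight * NVRAM_ENERGY_NJ);

        double stay = p->in_dram ? dram_cost  : nvram_cost;
        double move = (p->in_dram ? nvram_cost : dram_cost)
                      + time_weight * MIGRATE_COST_NS;
        return move < stay;            /* migrate only if it is a net win */
    }

    int main(void)
    {
        struct page hot  = { 500.0, false };   /* frequently accessed page in NVRAM */
        struct page cold = {   2.0, true  };   /* rarely accessed page in DRAM      */
        printf("migrate hot page:  %d\n", should_migrate(&hot,  1.0, 0.5));
        printf("migrate cold page: %d\n", should_migrate(&cold, 1.0, 0.5));
        return 0;
    }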

20.
A novel dataline redundancy suitable for an embedded DRAM macro with a wide data bus is presented. This redundancy reduces the area required for spare cells from 6% to 1.6% of the area required for normal cells and improves chip yield from 50% to 80%. In addition, it provides a high-speed data path. An embedded DRAM macro adopting this redundancy achieves 200-MHz operation and provides a 51.2-Gbit/s bandwidth. It has been fabricated with 0.25-μm technology.
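The abstract does not describe the remapping circuit; as a generic illustration of what dataline redundancy does (not necessarily the scheme in this paper), the sketch below steers read data around one fuse-recorded defective dataline onto a single shared spare line of a wide bus. All constants and names are assumptions.

    #include <stdint.h>
    #include <stdio.h>

    #define DATALINES  128           /* width of the internal data bus (example)   */
    #define NO_DEFECT  0xFFFFu       /* fuse value meaning "no dataline replaced"  */

    /* Fuse-programmed index of the defective dataline, set at wafer test. */
    static uint16_t defective_line = NO_DEFECT;

    /* Model of the read path: raw[] carries the normal datalines, spare carries
     * the single spare dataline, and out[] is the repaired bus seen by logic. */
    static void steer_read_data(const uint8_t raw[DATALINES], uint8_t spare,
                                uint8_t out[DATALINES])
    {
        for (unsigned i = 0; i < DATALINES; i++)
            out[i] = (i == defective_line) ? spare : raw[i];
    }

    int main(void)
    {
        uint8_t raw[DATALINES], out[DATALINES];
        for (unsigned i = 0; i < DATALINES; i++)
            raw[i] = 0;              /* pretend every line reads 0         */
        raw[42] = 1;                 /* defective line returns wrong data  */

        defective_line = 42;         /* fuse blown for line 42 at test     */
        steer_read_data(raw, 0 /* correct data from the spare */, out);
        printf("line 42 after repair: %u\n", (unsigned)out[42]);
        return 0;
    }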
