20 similar documents found; search time: 437 ms
1.
Random access memory organizations are typically chosen for maximum reliability, based on the operation of the memory box itself without concern for the remainder of the computing system. This has led to widespread use of the 1-bit-per-chip or related organizations, which use error-correcting codes to minimize the effects of failures occurring in some basic unit such as a word or double word (32 to 64 bits). Such memory boxes are commonly used in paged virtual memory systems, where the unit for protection is really a page (4K bytes), or in a cache, where the unit for protection is a block (32 to 128 bytes), not a double word. With typical high-density memory chips and typical ranges of failure rates, the 1-bit-per-chip organization can often maximize page failures in a virtual memory system. For typical cases, a paged virtual memory using a page-per-chip organization can substantially improve reliability and is potentially far superior to other organizations. This paper first describes the fundamental considerations of organization for memory systems and demonstrates the underlying problems with a simplified case. The reliability, in terms of lost pages per megabyte due to hard failures over any time period, is then analyzed for a paged virtual memory organized in both ways. Normalized curves give the lost pages per Mbyte as a function of failure rate and accumulated time. Assuming reasonable failure rates can be achieved, the page-per-chip organization can be 10 to 20 times more reliable than a 1-bit-per-chip scheme.
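The organizational trade-off can be illustrated with a toy model (our own construction, not the paper's analysis): assume whole-chip hard failures, a per-word single-error-correcting ECC for the 1-bit-per-chip case, and no ECC for the page-per-chip case. All sizes and names below are hypothetical.

```python
CHIPS = 16                         # chips in the memory box (illustrative size)
PAGES = 64                         # total pages held by the box
PAGES_PER_CHIP = PAGES // CHIPS    # 4 pages live entirely on one chip

def lost_pages(failed_chips, organization):
    """Pages lost to hard whole-chip failures under the two organizations."""
    if organization == "page-per-chip":
        # A dead chip takes down only the pages stored on it.
        return len(failed_chips) * PAGES_PER_CHIP
    # 1-bit-per-chip with a per-word SEC ECC:
    # one dead chip = one bad bit per word, corrected everywhere;
    # two or more dead chips = multiple bad bits per word -> every page lost.
    return 0 if len(failed_chips) <= 1 else PAGES
```

In this sketch, a single failure favors the interleaved scheme, but the second failure is catastrophic for it, while page-per-chip degrades linearly; that is the intuition behind the paper's comparison.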
2.
Jing Lee 《Microelectronics Reliability》2006,46(5-6):973-983
This paper deals with placing chips on an MCM substrate in chip-array style to minimize the system failure rate. The placement procedure begins with constructing an initial placement based on cooling considerations. A thermal-force model is then presented to transform the reliability-driven placement problem into solving a set of simultaneous nonlinear equations that determine thermal-force-equilibrium locations of the chips. A modified Newton–Raphson method is used to solve this system of equations. Finally, a chip assignment procedure transforms the thermal-force-equilibrium placement into an array-style placement with minimum thermal distortion. Two assignment methods are developed and compared with each other. Experiments on three industrial MCMs designed by IBM show that the obtained placements yield significant improvements in system reliability over the original designs. Additionally, a simulated annealing approach is presented to justify the performance of the proposed method.
3.
Techniques for Disturb Fault Collapsing (Cited by: 1; self-citations: 1; other citations: 0)
Disturb faults are considered one of the most important failure modes in non-volatile memories. They are highly dependent on the core memory cell structure, manufacturing technology, and array organization. In this paper, we analyze the origins of such disturbs and propose a method that uses cell structure and array organization information to identify the relevant disturbs and to create a reduced fault list. To demonstrate its effectiveness, the method was used to create minimized fault lists for NOR and NAND flash memory arrays. Moreover, we show how the reduced fault list can be used to devise more efficient test algorithms.
This work was supported by Kuwait University Research Grant Number EO 01/04.
4.
Kanad Chakraborty 《Journal of Electronic Testing》2004,20(1):89-108
Application-specific integrated circuits (ASICs) and high-performance processors such as Itanium and Compaq Alpha use a total of almost 75% of chip real estate for accommodating various types of embedded (or on-chip) memories. Although most of these embedded memories today are single-port static (and, in relatively few cases, dynamic) RAMs, the high demand for bandwidth in digital television, fast signal processing, and high-speed networking applications will also fuel the need for on-chip multiport memories in the foreseeable future. The reliability of a complex VLSI chip will depend largely on the reliability of these embedded memory blocks. With device dimensions moving rapidly toward the ultimate physical limits of device scaling, in the regime of feature sizes of 50 nm or so, a host of complex failure modes is expected to occur in memory circuits. This tutorial underlines the need for appropriate testing and reliability techniques for the present and next generations of embedded RAMs. Topics covered include reliability and quality testing, fault modeling, and advanced built-in self-test (BIST), built-in self-diagnosis (BISD), and built-in self-repair (BISR) techniques for high-bandwidth embedded RAMs.
5.
6.
This paper evaluates the reliability of a memory system incorporating any sort of linear error-correcting code. If the failure hypothesis is too simple (viz., a failure affects either the entire memory chip or only one memory bit), an evaluation of reliability can be wrong. The following considerations are thus important: (1) the failure model is based on the internal design of the memory chip; (2) the memory system hardware is accurately accounted for. The resulting model is very close to the hardware implementation and depends on six parameters. The model is very useful for easily comparing memory systems and for deriving tradeoffs among the implementation possibilities in the design of memory systems.
7.
《Semiconductor Manufacturing, IEEE Transactions on》2005,18(3):412-421
This paper provides a detailed analysis of the yield of embedded static random access memories (eSRAMs) generated using a compiler. Defect and fault analysis, inclusive of industrial data, is presented for these chips by taking into account the design and the physical properties of the layout. A new tool called Compiler-based Array Yield Analysis (CAYA) is introduced. CAYA allows for a characterization of the design process which accounts for fault types and the relation between functional and structural faults; moreover, it also relies on a novel empirical model which facilitates yield calculation. Industrial data is provided for the analysis of various configurations with different structures and redundancy. Architectural considerations, such as array size and line (word and bit) organization, are analyzed. Compiler-based features of different kernels (such as those required for decoding) are also treated in detail. An extensive evaluation of the provided redundancy (row, column, and combined) is pursued to characterize its impact on memory yield. Industrial data is used in the evaluation, and an industrial ASIC chip (made of multiple eSRAMs) is also considered as a design case.
8.
Static current (IDDQ) testing is a high-sensitivity, low-cost failure analysis technique for integrated circuits, increasingly used in IC fault detection, reliability testing, and screening. For a silicon-on-insulator ASIC that exhibited static current test failures after burn-in and thermal shock experiments, electrical and physical failure analysis methods, including I–V characterization, photon emission microscopy, and scanning electron microscopy, were combined to determine the existence, location, and type of physical defects in the gate oxide. Combined with an analysis based on the principle of time-dependent dielectric breakdown of the gate oxide, the main failure mechanism of the samples was revealed and the root cause of the time-dependent dielectric breakdown failure was analyzed, providing a basis for improving the process and the reliability of the circuit.
9.
10.
In this paper, two dynamic reconfiguration schemes are discussed for megabit BiCMOS static random access memories (SRAMs). Dynamic reconfiguration schemes allow failure detection at the chip level and automatic reconfiguration to fault-free memory cells within the chip. The first scheme is a standby-system approach in which the I/O lines of the memory can be dynamically switched to spare bit slices in the SRAM. This scheme is implemented through a switching network at the memory interface. Every memory access is controlled by a fault status table (FST), which records the fault condition of each memory block and is implemented outside the memory system. The second dynamic reconfiguration scheme for BiCMOS SRAMs is addressed through a graceful-degradation approach. Basic design considerations and a performance evaluation of megabit BiCMOS SRAMs using dynamic reconfiguration schemes are presented, along with the basic properties of the proposed schemes and implementation details of a prototype VLSI chip. Compared to conventional methods, improvements of about 35% in BiCMOS SRAM access time, 25% in chip area, and 10% in chip yield are achieved. A comparison of the reliability improvement of 1 Mb BiCMOS SRAMs using the dynamic reconfiguration schemes is presented. These two schemes offer considerable reliability improvement over conventional methods; their major advantage is that the size of the reconfigured portion of the system can be considerably reduced.
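The standby-spare idea can be sketched as a fault status table that redirects block accesses to spare bit slices. This is a minimal software sketch with hypothetical names; the paper's FST and switching network are hardware structures.

```python
class FaultStatusTable:
    """Toy fault-status-table lookup, loosely modeled on a standby-spare
    scheme: a block's I/O is switched to a spare slice once a fault is
    recorded. Class and method names are our own, not the paper's."""

    def __init__(self, n_blocks, n_spares):
        self.n_blocks = n_blocks          # memory blocks covered by the table
        self.remap = {}                   # faulty block -> assigned spare slice
        self.free_spares = list(range(n_spares))

    def record_fault(self, block):
        """Record a faulty block; returns False when no spares remain
        (the graceful-degradation case)."""
        if block in self.remap:
            return True                   # already remapped
        if not self.free_spares:
            return False
        self.remap[block] = self.free_spares.pop(0)
        return True

    def route(self, block):
        """Return ('spare', idx) if the access is switched, else ('primary', block)."""
        if block in self.remap:
            return ("spare", self.remap[block])
        return ("primary", block)
```

A lookup on every access is what the paper's switching network performs at the memory interface; here it is just a dictionary probe.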
11.
Amer Hassanein Hamed Iyer Ravishankar Krishnan 《Reliability, IEEE Transactions on》1986,35(4):377-379
This paper investigates the effect of uncertainty in chip failure rates on memory system reliability. It is shown, using real data on memory failures, that the dispersion in failure rates can be as large as 80%. An important consequence is to increase the unreliability of a memory system by up to 65%. Two simple models are proposed to evaluate the variability in memory reliability: the first is a worst-case estimate, and the second is a probabilistic model which needs only the mean and the standard deviation of the chip failure rate. With high failure rates, the maximum uncertainty in reliability occurs early in the system lifetime; with low failure rates, this effect is reversed.
12.
Live migration is widely used in cloud computing platforms, and virtual machine memory migration mainly relies on the pre-copy algorithm. In the traditional pre-copy algorithm, dirty pages are retransmitted repeatedly during the iterative rounds, which lengthens the migration time. To address this problem, this paper proposes combining a dirty-page prediction algorithm with a memory compression algorithm during the copy process. The dirty-page prediction algorithm uses statistics on the modification history of memory pages to mark frequently modified pages and, based on these marks, decides whether a page should be transferred only in the final round of migration. To reduce the transfer volume, pages are first compressed by the memory compression algorithm and then transmitted. Experimental results show that the improved method effectively reduces downtime and total migration time, improving migration efficiency and achieving faster migration.
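The combined scheme can be sketched as follows. This is a simplified illustration under our own assumptions: a fixed dirty-count threshold stands in for the paper's statistical predictor, and zlib stands in for its compression algorithm.

```python
import zlib

class PreCopyMigrator:
    """Toy pre-copy round with dirty-page prediction (hypothetical scheme,
    not the paper's exact algorithm)."""

    def __init__(self, pages, hot_threshold=3):
        self.pages = pages                           # page_id -> bytes
        self.dirty_count = {p: 0 for p in pages}     # modification history
        self.hot_threshold = hot_threshold

    def mark_dirty(self, page_id):
        self.dirty_count[page_id] += 1

    def is_hot(self, page_id):
        # Pages dirtied often are predicted to be dirtied again, so they
        # are deferred to the final stop-and-copy round.
        return self.dirty_count[page_id] >= self.hot_threshold

    def iterative_round(self, dirty_pages):
        """One pre-copy iteration: compress and send cold dirty pages,
        defer hot ones. Returns (payload, deferred_page_ids)."""
        sent, deferred = [], []
        for p in dirty_pages:
            (deferred if self.is_hot(p) else sent).append(p)
        # Compress before transfer to cut the volume on the wire.
        payload = {p: zlib.compress(self.pages[p]) for p in sent}
        return payload, deferred
```

In a full migrator, `iterative_round` would repeat until the deferred set is small enough, then the VM would be paused and the remaining pages copied in one final round.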
13.
Koike H. Amanuma K. Miwa T. Yamada J. Toyoshima H. 《Semiconductor Manufacturing, IEEE Transactions on》2002,15(2):201-208
A retention analysis method for ferroelectric random access memory (FeRAM) was developed, in which read signal voltages from memory cells are measured. The method uses on-chip sample/hold circuits, an off-chip A/D converter, and memory LSI testing equipment. FeRAM chip retention lifetime can be estimated on the basis of FeRAM read signal voltages after retention periods of one day and upwards. When used as a tool to estimate long-term data retention in FeRAM chips and to analyze fluctuations in memory cell characteristics, this method can provide useful information about FeRAM reliability.
14.
Felipe Silva Jardel Silveira Jarbas Silveira César Marcon Fabian Vargas Otávio Lima Jr. 《Journal of Electronic Testing》2018,34(4):417-433
As microelectronics technology continuously advances to deep submicron scales, the occurrence of Multiple Cell Upsets (MCUs) induced by radiation in memory devices becomes more likely. The implementation of a robust Error Correction Code (ECC) is a suitable solution; however, the more complex an ECC, the greater its delay, area usage, and energy consumption. An ECC with an appropriate balance between error coverage and computational cost is essential for applications where fault tolerance is heavily needed and energy resources are scarce. This paper describes the conception, implementation, and evaluation of Column-Line-Code (CLC), a novel algorithm for the detection and correction of MCUs in memory devices, which combines extended Hamming code and parity bits. In addition, this paper evaluates variations of the 2D CLC schemes and proposes an additional operation, called extended mode, to correct more MCU patterns. We compared the implementation cost, reliability level, detection/correction rate, and mean time to failure among the CLC versions and other correction codes, showing that the CLCs achieve high MCU correction efficacy with reduced area, power, and delay costs.
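The row/column idea behind such codes can be illustrated with a plain 2D even-parity sketch. This is our simplification: the actual CLC applies an extended Hamming code per line plus parity bits, which this toy omits.

```python
def encode(matrix):
    """Append an even-parity bit to each row, then an even-parity row
    over all columns (including the row-parity column)."""
    rows = [r + [sum(r) % 2] for r in matrix]
    col_par = [sum(row[j] for row in rows) % 2 for j in range(len(rows[0]))]
    return rows + [col_par]

def correct_single(codeword):
    """Locate and flip a single-bit error at the intersection of the one
    failing row parity and the one failing column parity."""
    bad_rows = [i for i, row in enumerate(codeword) if sum(row) % 2]
    bad_cols = [j for j in range(len(codeword[0]))
                if sum(row[j] for row in codeword) % 2]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        codeword[bad_rows[0]][bad_cols[0]] ^= 1   # single-bit error corrected
    return codeword
```

A single flipped bit violates exactly one row check and one column check, so it is correctable; two flips in the same row are detectable (two column checks fail) but not locatable, which is the kind of MCU pattern the paper's stronger per-line code targets.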
15.
16.
Eiji Takeda Takao Watanabe Shinichiro Kimura Jiro Yugami Keiichi Haraguchi Kei Suzuki Katsuro Sasaki 《Microelectronics Reliability》2000,40(6)
The progress of silicon technology is opening the era of “systems on silicon,” in which a large-scale memory, a CPU, and other logic macros are integrated on a single chip. These kinds of chips, called system LSIs, have an especially promising future in mobile and multimedia applications but face inherent technical problems related to the reliability of ultrathin oxide film, conflicts in the processing of different components, increased gate and subthreshold leakage currents, memory bottlenecks, and design complexity. This paper reviews system LSIs and then introduces related technologies in processing, circuits, chip architecture, and design. It also discusses the influence of system LSIs on business strategies.
17.
With the continuous scaling of CMOS device dimensions, it is becoming increasingly challenging for conventional failure analysis (FA) methods to identify the failure mechanism at the circuit level in an integrated chip. Scanning electron microscopy (SEM) based nanoprobing has become an increasingly critical tool for identifying non-visual failures via electrical characterization, and has been part of electrical FA metrology for fault isolation since 2006 (Toh et al., 2007; Shen et al., 2007; Ng et al., 2012). Currently, most nanoprobing fault isolation is performed in the top-down planar direction, such as probing on vias, contacts, and metal lines. This paper focuses on fault isolation of sub-micron devices by nanoprobing on a cross-sectional plane. This is a new application area, and it is very useful for samples on which conventional top-down planar nanoprobing cannot be performed, especially non-volatile memories whose single-transistor memory arrays are arranged in a vertical direction, such as Magnetic Random Access Memory (MRAM), Phase-Change Random Access Memory (PC-RAM), and flash memory.
18.
Kwabena A. Boahen 《Analog Integrated Circuits and Signal Processing》1997,13(1-2):53-68
I describe a vision system that uses neurobiological principles to perform all four major operations found in biological retinae: (1) continuous sensing for detection, (2) local automatic gain control for amplification, (3) spatiotemporal bandpass filtering for preprocessing, and (4) adaptive sampling for quantization. All four operations are performed at the pixel level. The system includes a random-access time-division multiplexed communication channel that reads out asynchronous pulse trains from a 64×64 pixel array in the imager chip, and transmits them to corresponding locations on a second chip that has a 64×64 array of integrators. Both chips are fully functional. I compare and contrast the design principles of the retina with standard practice in imager design and analyze the circuits used to amplify, filter, and quantize the visual signal, with emphasis on the performance trade-offs inherent in the circuit topologies used.
19.
Effect of substrate flexibility on solder joint reliability. Part II: finite element modeling (Cited by: 1; self-citations: 1; other citations: 0)
Solder joint fatigue failure is a serious reliability concern in area-array technologies, such as flip chip and ball grid array packages of integrated-circuit chips. The selection of different substrate materials can affect solder joint thermal fatigue life significantly. The mechanism by which substrate flexibility improves solder joint thermal fatigue was investigated by the thermal mechanical analysis (TMA) technique and finite element modeling. The reliability of solder joints in real flip chip assemblies with both rigid and compliant substrates was evaluated by an accelerated temperature cycling test. Finite element simulations applying the Anand model were conducted to study the reliability of solder joints in flip chip on flex (FCOF) and flip chip on rigid board (FCOB) assemblies. Based on the finite element analysis results, the fatigue lives of solder joints were obtained with Darveaux's crack initiation and growth model. The thermal strain/stress in solder joints of flip chip assemblies with different substrates was compared, and the finite element analysis results showed good agreement with the experimental results. It was found that the thermal fatigue lifetime of FCOF solder joints was much longer than that of FCOB solder joints. The thermal strain/stress in solder joints could be reduced by flex buckling or bending, and flex substrates could dissipate energy that would otherwise be absorbed by the solder joints. It was concluded that substrate flexibility has a great effect on solder joint reliability, and the reliability improvement was attributed to flex buckling or bending during temperature cycling.
20.
《Advanced Packaging, IEEE Transactions on》2005,28(3):413-420
This paper presents a new package design for multichip modules. The developed package has a flip-chip-on-chip structure. Four chips [simulating dynamic random access memory (DRAM) chips for demonstration purposes] are assembled on a silicon chip carrier with eutectic solder joints. The I/Os of the four chips are fanned in on the silicon chip carrier to form an area array with larger solder balls. A through-silicon via (TSV) hole is made at the center of the silicon chip carrier for optional underfill dispensing. The whole multichip module is mounted on the printed circuit board by the standard surface mount reflow process. After the board-level assembly and X-ray inspection, the underfill process is applied to selected specimens for comparative study. The underfill material is dispensed through the center TSV hole on the silicon chip carrier to encapsulate the solder joints and the four smaller chips. Subsequently, scanning acoustic microscopy (SAM) is performed to inspect the quality of the underfill. After the board-level assembly, all specimens are subjected to the accelerated temperature cycling (ATC) test, during which the electrical resistance of all specimens is monitored. The experimental results show that the packages without underfill encapsulation may fail in fewer than 100 thermal cycles, while those with underfill can last for more than 1200 cycles. From the dye ink analysis and the cross-section inspection, it is identified that the packages without underfill fail in the silicon chip carrier rather than in the solder joints. The features and merits of the present package design are discussed in detail.