Similar Documents
1.
Soft errors, due to cosmic radiation, are one of the major challenges for reliable VLSI designs. In this paper, we present a symbolic framework to model soft errors in both synchronous and asynchronous designs. The proposed methodology utilizes Multiway Decision Graphs (MDGs) and glitch-propagation sets (GP sets) to obtain soft error rate (SER) estimates at the gate level. This work helps mitigate design-for-testability (DFT) issues related to identifying the controllable and observable circuit nodes when the circuit is subject to soft errors. The methodology also allows designers to apply radiation-tolerance techniques to reduced sets of internal nodes. To demonstrate the effectiveness of our technique, several ISCAS89 sequential and combinational benchmark circuits and multiple asynchronous handshake circuits have been analyzed. Results indicate that the proposed technique is on average 4.29 times faster than the best contemporary state-of-the-art techniques. The proposed technique is capable of exhaustively identifying soft-error glitch propagation paths, which are then used to estimate the SER. To the best of our knowledge, this is the first time a decision-diagram-based soft error identification approach has been proposed for asynchronous circuits.

2.
Since the thermal responses of the drive current in recent 3D FinFET and conventional planar transistors differ, performance and reliability in advanced VLSI circuits must be reconsidered. This study investigates temperature effects on two of the most problematic reliability issues in modern logic circuits, namely Bias Temperature Instability (BTI) and soft errors. In particular, we first examine the temperature-effect inversion that strengthens the drive current in 14-nm bulk tri-gate FinFETs with increasing temperature, and model it as a source of threshold-voltage reduction. This temperature-induced threshold-voltage variation is then incorporated into our proposed simulation and analysis framework for BTI degradation in large combinational circuits. The BTI aging results from our proposed estimation are more pessimistic than those from the conventional approach, which excludes the temperature effect. Simulation results show that long-term BTI aging delay worsens as temperature increases, yet the dominant thermal effect on the drive current leads to an overall performance improvement in all circuits under 10-year BTI stress. In addition, soft errors and their masking probabilities in logic circuits are addressed under temperature-effect inversion and supply-voltage variation. The results reveal that soft error immunity in all experimental circuits improves significantly with increasing supply voltage and temperature, mainly due to the increase of critical charge. The average relative soft error rate when the supply voltage changes from 0.4 V to 0.6 V and 0.8 V at 0 °C is as low as 3.7% and 0.08% of the average result at 0.4 V, respectively. On average, the relative soft error rate at a particular supply voltage when temperature changes from 0 °C to 40 °C, 80 °C, and 120 °C is around 70%, 50%, and 30% of the average result at 0 °C, respectively.

3.
Besides the advantages brought by technology scaling, soft errors have emerged as an important reliability challenge for nanoscale combinational circuits. Soft-error vulnerability analysis of digital circuits therefore needs practical metrics to achieve cost-effective and reliable designs. In this paper, a new metric called Triple Constraint Satisfaction probability (TCS) is proposed to evaluate the soft error vulnerability of combinational circuits. TCS is based on a concept called the Probabilistic Vulnerability Window (PVW), which captures the necessary conditions for soft-error occurrence in the circuit. We propose a computation model to calculate the PVWs for all circuit gate outputs. To show the efficiency of the proposed metric, TCS is used to rank the vulnerability of circuit gates, the basic step of vulnerability reduction techniques. The experimental results show that TCS provides a distribution of soft error vulnerability similar to that obtained with fault injections performed with HSPICE or with an event-driven simulator, while being more than three orders of magnitude faster. The results also show that using the proposed metric in the well-known filter insertion technique reduces the soft error vulnerability of benchmark circuits by up to 19.4%, 34.1%, and 55% at the cost of increasing the area overhead by 5%, 10%, and 20%, respectively.

4.
In this work, to increase the reliability of low-power digital circuits in the presence of soft errors, the combined use of III-V TFET- and III-V MOSFET-based gates is proposed. The hybridization exploits the facts that the transient currents generated by particle hits in TFET devices are smaller than those in MOSFET-based devices, while MOSFET-based gates are superior in terms of electrical masking of soft errors. In this approach, the circuit is primarily implemented with InAs TFET devices to reduce power and energy consumption, while gates that can propagate generated soft errors are implemented with InAs MOSFET devices. The decision to replace a subset of TFET-based gates with their corresponding MOSFET-based gates is made through a heuristic algorithm. Furthermore, by exploiting the advantages of TFETs and MOSFETs, a hybrid TFET-MOSFET soft-error-resilient and low-power master-slave flip-flop is introduced. To assess the efficacy of the proposed approach, the hybridization algorithm is applied to sequential circuits from the ISCAS'89 benchmark suite. Simulation results show that the soft error rate of the TFET-MOSFET-based circuits due to particle hits is up to 90% lower than that of the purely TFET-based circuits. Furthermore, the energy and leakage power consumption of the proposed hybrid circuits are up to 79% and 70% lower, respectively, than those of the MOSFET-only designs.

5.
Due to the increased complexity of modern digital circuits, simulation-based soft error detection methods have become cumbersome and very time-consuming. FPGA-based emulation provides an attractive alternative, as it not only offers higher speed but also handles highly complex circuits. In this work, a novel FPGA-based soft error detection technique is proposed that enables detection of soft errors resulting from voltage pulses of different magnitudes induced by single-event transients (SETs). The paper analyzes the effect of the transient injection location on the soft error rate (SER) and applies the idea of transient equivalence to minimize resource overhead as well as to speed up the emulation process. Switch-level implementations of the ISCAS'85 benchmarks are designed using gate-level structures, and experimental results are reported. The results show that applying transient equivalence yields an emulation speed-up of 2.875× and reduces memory utilization by 65%. An average soft error rate (SER) of 0.7-0.8 was achieved using the proposed strength-based detection with the drain as the transient injection location, showing that voltage pulses of magnitude smaller than the logic threshold can eventually result in soft errors. Furthermore, the presented emulation-based soft error detection technique achieved a significant speed-up on the order of 10^6 compared to a customized simulation-based method.

6.
With technology downscaling in today's digital circuits, sensitivity to radiation effects increases, making the occurrence of soft errors more probable. As a consequence, soft error rate estimation of complex circuits such as processors is becoming an important issue in safety- and mission-critical applications. Fault injection is a well-known and widely used approach for soft error rate estimation. Developing previous FPGA-based fault injection techniques is very time-consuming, mainly because they do not adequately exploit supplementary FPGA tools. This paper proposes an easy-to-develop and flexible FPGA-based fault injection technique. The technique utilizes the debugging facilities of Altera FPGAs to inject single event upset (SEU) and multiple bit upset (MBU) fault models into both flip-flops and memory units. As this technique uses FPGA built-in facilities, it imposes negligible performance and area overheads on the system. The experimental results show that the proposed technique is on average four orders of magnitude faster than pure simulation-based fault injection. These features make the proposed technique applicable to industrial-scale circuits.

7.
In VLSIs, soft errors resulting from radiation-induced transient pulses occur frequently. In recent high-density, low-power VLSIs, system operation is seriously affected not only by soft errors in memory systems and the latches of logic circuits, but also by those in the combinational parts of logic circuits. Existing tolerance methods for soft errors in the combinational parts do not provide sufficiently high tolerance at a small performance penalty. This paper proposes a class of soft error masking circuits using a Schmitt trigger circuit and a pass transistor. The paper also presents a construction of soft error masking latches (SEM-latches) capable of masking transient pulses occurring in combinational circuits. Moreover, simulation results show that the proposed method has higher soft error tolerance than existing methods. For a supply voltage VDD = 3.3 V, the proposed method is capable of masking transient pulses of magnitude 4.0 V or less.

8.
Glitches due to secondary neutron particles from cosmic rays cause soft errors in integrated circuits (ICs) and are becoming a major threat in modern sub-45 nm ICs. Researchers have therefore developed many techniques to mitigate soft errors, some of which utilize the built-in error detection schemes of low-power asynchronous NULL Convention Logic (NCL). However, these require extensive simulations and emulations for a careful and complete analysis of the design, which can be costly and time-consuming and cannot cover all possible input conditions. In this paper, we propose a framework to improve soft-error-tolerant asynchronous pipelines by identifying and formally analyzing the vulnerable paths using the nuXmv model checker. The proposed framework translates the design behavior and specification into a state-space model, and the potential vulnerabilities against soft errors in the pipeline into linear temporal logic (LTL) properties. These formally specified properties are then verified on the state-space model, and in case of failure, counterexamples are obtained. These counterexamples can be further analyzed to obtain the soft-error propagation paths and thus give designers insight into soft-error-tolerant approaches. For illustration, this work provides an analysis and comparison of three state-of-the-art asynchronous pipelines. Formal models and analysis of all the pipelines show that the soft-error-hardened pipeline is comparatively superior against soft errors, but at the expense of almost twice the area overhead.

9.
The traditional probabilistic transfer matrix (PTM) method can estimate the impact of soft errors on the reliability of gate-level circuits with relatively high accuracy, but existing methods only apply to the reliability estimation of combinational logic circuits. This paper proposes a PTM-based reliability estimation method for sequential circuits (S-PTM). The sequential circuit under evaluation is first partitioned into an output-logic block and a next-state-logic block; the circuit's PTM is then obtained with the proposed sequential-circuit PTM computation model; finally, the reliability of the sequential circuit is computed from the probability distribution of the input signals. Experiments on the ISCAS 89 benchmark circuits show that the proposed method is accurate and reasonable.
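To make the PTM formalism concrete, the sketch below (a minimal illustration, not the S-PTM model itself) builds the PTM of a tiny hypothetical two-level NAND circuit and computes its reliability under a uniform input distribution; the per-gate error probability and the circuit topology are assumptions chosen only for illustration.

```python
import numpy as np

def gate_ptm(truth_table, p_err):
    """PTM of a gate: each row is an input pattern, each column an output
    value; with probability p_err the gate emits the wrong value."""
    rows = []
    for out in truth_table:              # ideal output for this input pattern
        row = [p_err, p_err]
        row[out] = 1.0 - p_err
        rows.append(row)
    return np.array(rows)

P_ERR = 1e-3                             # hypothetical per-gate error probability
NAND_TT = [1, 1, 1, 0]                   # outputs for inputs 00, 01, 10, 11

nand_ptm = gate_ptm(NAND_TT, P_ERR)      # faulty gate (4 x 2)
nand_itm = gate_ptm(NAND_TT, 0.0)        # ideal transfer matrix

# Circuit: out = NAND(NAND(a, b), NAND(c, d)).
# Parallel gates combine with the Kronecker product, cascaded stages with
# the ordinary matrix product.
ckt_ptm = np.kron(nand_ptm, nand_ptm) @ nand_ptm   # 16 x 2
ckt_itm = np.kron(nand_itm, nand_itm) @ nand_itm   # 16 x 2, 0/1 entries

# Reliability = probability that the faulty circuit matches the ideal one,
# averaged over a uniform primary-input distribution.
input_probs = np.full(16, 1.0 / 16.0)
reliability = float(input_probs @ np.sum(ckt_ptm * ckt_itm, axis=1))
print(f"circuit reliability: {reliability:.6f}")
```

Because parallel gates combine via Kronecker products, the matrices grow exponentially with circuit width, which is why partitioning the circuit (as done here for the output- and next-state-logic blocks) matters in practice.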

10.
Due to continuous technology scaling, soft errors have become a major reliability issue at nanoscale technologies. Single or multiple event transients at low levels can result in multiple correlated bit flips at the logic or higher abstraction levels. Addressing this correlation is essential for accurate low-level soft error rate estimation and, more importantly, for cross-level error abstraction, e.g., from bit errors at the logic level to word errors at the register-transfer level. This paper proposes a novel error estimation method that takes both signal and error correlations into consideration. It unifies the treatment of error-free and erroneous signals, so that error probabilities and correlations can be computed using techniques for calculating signal probabilities and correlations. The proposed method not only reports accurate error probabilities when internal gates are impaired by soft errors, but also quantifies the error correlations during propagation. This feature makes our method a versatile technique for high-level error estimation. The experimental results validate the proposed technique, showing that compared with Monte-Carlo simulation it is 5 orders of magnitude faster, while the average inaccuracy of error probability estimation is only 0.02.

11.
Soft error modeling and remediation techniques in ASIC designs
Soft errors due to cosmic radiation are the main reliability threat during the lifetime operation of digital systems. Fast and accurate estimation of the soft error rate (SER) is essential for obtaining the reliability parameters of a digital system in order to balance its reliability, performance, and cost. Previous techniques for SER estimation are mainly based on fault injection and random simulations. In this paper, we present an analytical SER modeling technique for ASIC designs that can significantly reduce SER estimation time while achieving very high accuracy. This technique can be used for both combinational and sequential circuits. We also present an approach to obtain uncertainty bounds on the estimated error propagation probability (EPP) values used in our SER modeling framework. Comparison of this method with the Monte-Carlo fault injection and simulation approach confirms the accuracy and speed-up of the presented technique for both the computed EPP values and the uncertainty bounds. Based on our SER estimation framework, we also present efficient soft error hardening techniques based on selective gate resizing that maximize soft error suppression for the entire logic-level design while minimizing area and delay penalties. Experimental results confirm that these techniques are able to significantly reduce the soft error rate with modest area and delay overhead.
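As a point of reference for what the error propagation probability (EPP) measures, here is a minimal Monte-Carlo fault-injection sketch of the kind such analytical techniques are compared against: a transient flip is injected at an internal node of a tiny hypothetical circuit, and the fraction of random input vectors for which the flip reaches the primary output is counted. The circuit, node names, and trial count are assumptions for illustration; the paper's analytical EPP computation is not reproduced.

```python
import random

def circuit(a, b, c, fault_node=None):
    """Tiny hypothetical combinational circuit; a fault at an internal node
    is modelled by inverting that node's value (a transient flip)."""
    n1 = a and b                 # internal node n1 = AND(a, b)
    if fault_node == "n1":
        n1 = not n1
    n2 = n1 or c                 # internal node n2 = OR(n1, c)
    if fault_node == "n2":
        n2 = not n2
    return not n2                # primary output = NOT(n2)

def epp_monte_carlo(node, trials=100_000, seed=0):
    """Estimate the EPP of one node: the fraction of random input vectors
    for which a flip at that node is visible at the primary output."""
    rng = random.Random(seed)
    propagated = 0
    for _ in range(trials):
        a, b, c = (rng.random() < 0.5 for _ in range(3))
        if circuit(a, b, c) != circuit(a, b, c, fault_node=node):
            propagated += 1
    return propagated / trials

for node in ("n1", "n2"):
    print(node, epp_monte_carlo(node))
```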

12.
Shrinking transistor sizes and supply voltages in advanced VLSI logic circuits significantly increases the susceptibility of the circuits to soft errors. Therefore, analyzing how a soft error occurring at one node affects other nodes is an essential step in VLSI logic circuit design. In this paper, a novel approach based on Mason's gain formula is proposed for the node-to-node sensitivity analysis of logic circuits. By taking advantage of matrix sparsity, the runtime and memory requirements of the proposed approach become scalable. Also, by taking the effects of reconvergent paths into account, the accuracy of the proposed approach is improved considerably. According to the simulation results, the proposed approach runs 4.7× faster than those proposed in prior works, while its computational complexity is O(N^1.07) on average.

13.
Error correction codes (ECCs) are commonly used to deal with soft errors in memory applications. Single Error Correction-Double Error Detection (SEC-DED) codes are widely used due to their simplicity. However, the occurrence of more than one error in memory cells has become more serious in advanced technologies. Single Error Correction-Double Adjacent Error Correction (SEC-DAEC) codes are a good choice to protect memories against double adjacent errors, which are a major multiple-error pattern. An important consideration is that the ECC encoder and decoder circuits can also be affected by soft errors, which will corrupt the memory data. In this paper, a method to design fault-tolerant encoders for SEC-DAEC codes is proposed. It builds on the observation that soft errors in the encoder have an effect similar to soft errors in a memory word, and it is realized by using logic sharing blocks for every two adjacent parity bits. In the proposed scheme, one soft error in the encoder can cause at most two errors on adjacent parity bits, so the correctness of the memory data is ensured because such errors are correctable by the SEC-DAEC code. The proposed scheme has been implemented, and the results show that it requires less circuit area and power than encoders protected by existing methods.
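For orientation, the sketch below shows how a systematic encoder forms parity bits as XORs of data-bit subsets. It uses a plain Hamming(7,4) code extended with an overall parity bit (SEC-DED), not the SEC-DAEC code or the logic-sharing encoder protection proposed in the paper, and the data word is arbitrary.

```python
# Minimal systematic encoder sketch: parity bits are XORs of data-bit subsets.
DATA_BITS = 4
# Data-bit positions XORed into each parity bit (standard Hamming(7,4) sets).
PARITY_SETS = [(0, 1, 3), (0, 2, 3), (1, 2, 3)]

def encode(data):
    """Return [d0..d3, p0..p2, overall_parity] for a 4-bit data word."""
    assert len(data) == DATA_BITS
    parity = [sum(data[i] for i in s) % 2 for s in PARITY_SETS]
    overall = (sum(data) + sum(parity)) % 2        # double-error-detection bit
    return list(data) + parity + [overall]

print(encode([1, 0, 1, 1]))                        # hypothetical data word
```

In a SEC-DAEC code the parity sets are chosen so that double adjacent errors map to distinct syndromes; the paper's contribution is to share logic between adjacent parity trees so that a single encoder fault corrupts at most two adjacent parity bits, which the code can still correct.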

14.
This paper presents a novel framework for accurate estimation of key statistical parameters of the subthreshold- and gate-leakage distributions of a chip under parameter variations, considering both within-die and die-to-die variability in process (P), temperature (T), and supply voltage (V). For the first time, temperature variations and, more importantly, electrothermal coupling between junction (substrate or die) temperature and leakage power are accounted for in a full-chip leakage estimation methodology. In the proposed framework, instead of the exact leakage distribution profile, its statistically important parameters, such as nominal value and spread, are computed. Initially, at the transistor level, a quantitative analysis of the relative sensitivities of the device leakage components to P-T-V variations is performed to extract a transistor-level variation model. It is shown that the proposed statistical model, compared to others in the literature, shows better agreement with BSIM1 model-based simulations. It is also demonstrated that failing to account for temperature variations and electrothermal coupling can result in significant inaccuracy in chip-level leakage estimation. Furthermore, the full-chip leakage-power distribution is used to estimate the leakage-constrained yield under the impact of variations. The calculations show that yield is significantly lowered by within-die and die-to-die process and temperature variations. Finally, the proposed framework is applied to the leakage estimation of complex logic circuits, considering spatial correlations of process parameters and transistor stacking effects.
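A minimal sketch of the statistical flavour of such a framework is given below: Monte-Carlo sampling of die-to-die and within-die threshold-voltage variation, a textbook exponential subthreshold-leakage model, and a crude electrothermal fixed-point loop in which leakage heats the die and the junction temperature feeds back into leakage. Every constant (prefactor, device counts, thermal resistance, sigmas, yield limit) is a hypothetical placeholder, and the model is far simpler than the paper's calibrated framework.

```python
import numpy as np

K_B_Q   = 8.617e-5       # kT/q per kelvin (V/K)
I0      = 5e-6           # subthreshold prefactor per device (A) - hypothetical
N_FACT  = 1.5            # subthreshold slope factor - hypothetical
R_TH    = 10.0           # junction-to-ambient thermal resistance (K/W) - hypothetical
VDD     = 1.0            # supply voltage (V)
N_SAMP  = 5_000          # devices sampled to capture within-die statistics
N_TOTAL = 2e8            # total leaky devices on the chip - hypothetical
N_DIE   = 500            # Monte-Carlo die samples

rng = np.random.default_rng(0)

def die_leakage(vth_nom=0.30, sigma_d2d=0.02, sigma_wid=0.03, t_amb=300.0):
    """Leakage power of one die with die-to-die and within-die Vth variation,
    iterated to a self-consistent junction temperature (electrothermal loop)."""
    vth = vth_nom + rng.normal(0.0, sigma_d2d) + rng.normal(0.0, sigma_wid, N_SAMP)
    t_j = t_amb
    for _ in range(10):                              # electrothermal fixed point
        v_t = N_FACT * K_B_Q * t_j                   # n * kT/q
        i_dev = I0 * np.exp(-vth / v_t).mean()       # average per-device leakage
        p_leak = VDD * N_TOTAL * i_dev               # chip leakage power
        t_j = t_amb + R_TH * p_leak                  # leakage heats the die ...
    return p_leak                                    # ... which raises leakage again

samples = np.array([die_leakage() for _ in range(N_DIE)])
print(f"nominal (mean) leakage: {samples.mean():.3f} W")
print(f"spread (std dev):       {samples.std():.3f} W")
# Hypothetical leakage-constrained yield against a 1.5x-nominal power limit.
print(f"yield: {(samples < 1.5 * samples.mean()).mean():.1%}")
```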

15.
Cosmic-ray soft errors from ground level up to aircraft flight altitudes are caused mainly by neutrons. We derived an empirical model for estimating the soft error rate (SER). Test circuits were fabricated in a standard 0.6-μm CMOS process, and the dependence of the neutron SER on critical charge and supply voltage was measured. Time constants of the noise current were extracted from the measurements and compared with three-dimensional device simulations. The empirical model was calibrated and verified by independent SER measurements. The model is capable of predicting the cosmic-ray neutron SER of any circuit manufactured in the same process as the test circuits. We also predicted the SER of a static memory cell.
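An often-used empirical form for neutron-induced SER of this kind is SER = F · A · K · exp(−Qcrit/Qs), with F the neutron flux, A the sensitive area, K a fitting constant, and Qs the charge-collection efficiency. The sketch below evaluates this form with made-up constants to show how SER falls off with critical charge (and hence with supply voltage); the numbers are illustrative assumptions, not the model calibrated in the paper.

```python
import math

FLUX  = 13.0        # assumed neutron flux, n/(cm^2*h); depends on altitude/location
K_FIT = 2.2e-5      # technology fitting constant - hypothetical
AREA  = 1.0e-8      # sensitive area per cell in cm^2 - hypothetical
Q_S   = 20.0        # charge-collection efficiency in fC - hypothetical

def qcrit_fc(c_node_ff, vdd):
    """First-order critical charge: node capacitance times supply voltage (fF*V = fC)."""
    return c_node_ff * vdd

def ser_per_cell(q_crit_fc):
    """SER of one cell in upsets per hour for a given critical charge."""
    return FLUX * AREA * K_FIT * math.exp(-q_crit_fc / Q_S)

for vdd in (0.6, 0.9, 1.2):
    q = qcrit_fc(c_node_ff=25.0, vdd=vdd)          # 25 fF storage node - hypothetical
    fit = ser_per_cell(q) * 1e9 * 1e6              # convert to FIT per Mbit
    print(f"VDD={vdd:.1f} V  Qcrit={q:5.1f} fC  SER ~ {fit:7.1f} FIT/Mbit")
```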

16.
Modern nanometer circuits have become more prone to soft errors, necessitating faster and more reliable error detection techniques. Simulation-based soft error detection has been popular but is limited by its inability to handle complex circuits and by its high run-time. FPGA-based soft error detection methods can overcome the speed limitation of simulation as well as handle circuits of much higher complexity. This paper presents a novel strength-based soft error emulation method targeting soft errors caused by transient pulses of magnitude below the logic threshold. The impact of the transient injection location on soft error coverage is analyzed, and the idea of using the drain of a transistor as the transient injection location is presented. Furthermore, the concept of transient equivalence is applied to minimize resource overhead as well as to speed up the soft error detection process. Advanced switch-level models are designed using gate-level structures and used to implement switch-level equivalents of the ISCAS'85 benchmarks. The experimental results for the ISCAS'85 benchmarks show that an average soft error coverage of 0.7-0.8 was achieved using the proposed strength-based detection with the drain as the transient injection location. The application of transient equivalence sped up emulation by 2.875× and reduced memory utilization by 65%. The emulation-based soft error detection achieved a significant speed-up on the order of 10^6 compared to a customized simulation-based method.

17.
This paper deals with the derivation and optimization of an iterative receiver architecture performing joint multiuser decoding and channel estimation. We consider an asynchronous multirate convolutionally coded DS-CDMA system communicating over quasi-static flat Rayleigh fading channels. The proposed receiver is derived within the space-alternating generalized expectation-maximization (SAGE) framework in connection with the noise-splitting approach. This theoretical framework guarantees convergence of the receiver, as opposed to many other iterative receiver structures. Furthermore, the noise-splitting approach provides a set of noise-weighting coefficients that can be optimized under weak constraints. The inputs to the single-user decoders are linear combinations of two kinds of soft values with weights determined by the noise-weighting coefficients. These two kinds of soft values can be interpreted as a priori information and extrinsic information, respectively, if the channels are known. In the case of unknown channels, they are asymptotically a priori and asymptotically extrinsic, i.e., they become a priori and extrinsic as the length of the observed frame tends to infinity. In most cases, the optimum coefficients lead to extrinsic or asymptotically extrinsic values being fed to the input of the single-user decoders. Monte Carlo simulations show that the proposed receiver is resistant to channel estimation errors and supports high system loads.

18.
In this paper, we propose two methods for blind estimation of the modulation index of full-response binary continuous phase modulation (CPM) schemes. Most previous works assume a single sample per symbol; thus, they cannot improve their performance when the sampling rate exceeds the symbol rate. In our proposed methods, the modulation index is estimated from samples of the autocorrelation function of the received signal, which is a nonlinear function of the modulation index. We use a Taylor expansion to approximate this nonlinear function by a linear one. Then, by choosing some samples of the autocorrelation function, we estimate the modulation index with a least square (LS) estimator. To further reduce estimation errors, the second proposed method exploits the statistical properties of the autocorrelation estimation errors to design a best linear unbiased estimator (BLUE). Numerical performance analysis in terms of mean-squared error (MSE) and bit error rate (BER) shows that the proposed methods outperform the reference methods in accurately estimating the modulation index.
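The first estimator's core idea, linearize the nonlinear parameter dependence with a first-order Taylor expansion and then solve by least squares, can be sketched generically as below. The function g() standing in for the CPM autocorrelation samples, the lag set, the noise level, and the expansion point are all assumptions for illustration; the paper's actual autocorrelation expressions and the BLUE variant are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
lags = np.arange(1, 9)                    # autocorrelation lags used by the estimator

def g(h, k):
    """Stand-in for the nonlinear dependence of the k-th autocorrelation
    sample on the modulation index h (the true CPM expression is paper-specific)."""
    return np.cos(np.pi * h * k / 4.0)

def dg_dh(h, k):
    """Derivative of g with respect to h, needed for the Taylor expansion."""
    return -(np.pi * k / 4.0) * np.sin(np.pi * h * k / 4.0)

h_true = 0.63                             # actual (unknown) modulation index
h_nom  = 0.60                             # nominal value used as expansion point
y = g(h_true, lags) + rng.normal(0.0, 0.01, lags.size)   # noisy "measured" samples

# First-order Taylor expansion around h_nom:
#   y_k ~= g_k(h_nom) + g'_k(h_nom) * (h - h_nom)
# so the offset (h - h_nom) is found by ordinary least squares.
A = dg_dh(h_nom, lags).reshape(-1, 1)
b = y - g(h_nom, lags)
delta, *_ = np.linalg.lstsq(A, b, rcond=None)
h_hat = h_nom + float(delta[0])
print(f"true h = {h_true:.3f}, linearized-LS estimate = {h_hat:.3f}")
```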

19.
卜登立 《电子学报》2018,46(12):3060-3067
When a signal-probability-based power model is used to optimize the power of MPRM (Mixed Polarity Reed-Muller) circuits, signal probability computation is the key to the power calculation. This paper proposes a probability-expression-based power computation method for MPRM circuits. The method balances the time efficiency and the accuracy of signal probability computation: for signals without spatial correlation in the MPRM circuit, signal probabilities are computed by propagating probabilities through the circuit; for spatially correlated signals, probabilities are computed with probability expressions, which are propagated through the circuit to resolve the spatial correlation. On this basis, the circuit power is computed from analytical dynamic- and static-power models built on signal probabilities. To further improve time efficiency, the method represents probability expressions with binary moment diagrams. The proposed method was validated on benchmark circuits and compared with other MPRM power computation methods that use different signal probability calculation approaches. The results show that the proposed method is accurate and effective.
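A minimal sketch of the signal-probability-propagation step described above (for the spatially uncorrelated case only) is shown below: probabilities are pushed through AND and XOR gates under an independence assumption, switching activity is taken as 2p(1−p) under temporal independence, and dynamic power follows the usual ½·α·C·V²·f model. The tiny AND-XOR network, capacitance, supply voltage, and frequency are hypothetical; the probability-expression/BMD machinery that the paper uses for correlated signals is not included.

```python
# Constants below are hypothetical.
VDD, FREQ = 1.0, 1e9              # supply voltage (V), clock frequency (Hz)
C_LOAD = 2e-15                    # load capacitance per gate output (F)

def p_and(pa, pb):                # P(a AND b = 1), assuming independent inputs
    return pa * pb

def p_xor(pa, pb):                # P(a XOR b = 1), assuming independent inputs
    return pa * (1.0 - pb) + (1.0 - pa) * pb

def switching_activity(p1):       # transition probability under temporal independence
    return 2.0 * p1 * (1.0 - p1)

# Tiny AND-XOR (Reed-Muller-style) network: f = (x1 AND x2) XOR (x2 AND x3).
# x2 is shared, so independence is already an approximation here -- exactly the
# spatial correlation the paper handles with probability expressions.
p_x1 = p_x2 = p_x3 = 0.5
p_n1 = p_and(p_x1, p_x2)
p_n2 = p_and(p_x2, p_x3)
p_f  = p_xor(p_n1, p_n2)

# Dynamic power: 0->1 transitions occur with probability alpha/2 per cycle,
# each dissipating C*VDD^2, hence P = 0.5 * alpha * C * VDD^2 * f per net.
dyn_power = sum(0.5 * switching_activity(p) * C_LOAD * VDD**2 * FREQ
                for p in (p_n1, p_n2, p_f))
print(f"approx. P(f=1) = {p_f:.3f}, estimated dynamic power = {dyn_power * 1e6:.3f} uW")
```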

20.
Estimating the average power dissipation of a circuit through exhaustive simulation is impractical due to the large number of primary inputs and their combinations. In this work, two algorithms based on least square estimation are proposed for determining the average power dissipation in complementary metal-oxide-semiconductor (CMOS) circuits. Least square estimation converges faster by attempting to minimize the mean square error value during each iteration. Two statistical approaches, namely sequential least square (SLS) estimation and recursive least square estimation, are investigated. The proposed methods are distribution-independent with respect to the input samples, unbiased, and based on point estimation. Experimental results presented for the MCNC'91 and ISCAS'89 benchmark circuits show that the least square estimation algorithms converge faster than other statistical techniques such as the Monte Carlo method and DIPE.
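As an illustration of the sequential, convergence-checked estimation idea, the sketch below updates a running estimate recursively (the constant-parameter special case of recursive least squares, which reduces to a running mean with Welford's variance update) and stops sampling once the standard error drops below a target fraction of the estimate. The per-vector "measured" power is a random stand-in for a gate-level power simulator, and the stopping threshold is an arbitrary choice, not the paper's SLS/RLS formulation.

```python
import random

def simulate_vector_power(rng):
    """Hypothetical per-input-vector power measurement (arbitrary units)."""
    return max(0.0, rng.gauss(mu=4.0, sigma=1.5))

def estimate_average_power(rel_err=0.01, min_samples=100, seed=42):
    rng = random.Random(seed)
    mean, m2, n = 0.0, 0.0, 0
    while True:
        n += 1
        p = simulate_vector_power(rng)
        delta = p - mean
        mean += delta / n                  # recursive LS update for a constant
        m2 += delta * (p - mean)           # running sum of squared deviations
        if n >= min_samples:
            std_err = (m2 / (n - 1)) ** 0.5 / n ** 0.5
            if std_err < rel_err * mean:   # stop once the estimate has converged
                return mean, n

avg, samples = estimate_average_power()
print(f"estimated average power: {avg:.3f} (after {samples} vectors)")
```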
