Similar Documents
20 similar documents found.
1.
Low-power fault tolerance design techniques trade off reliability to reduce the area cost and power overhead of integrated circuits by protecting only a subset of their workload or only their most vulnerable parts. However, in the presence of faults, not all workloads are equally susceptible to errors. In this paper, we present a low-power fault tolerance design technique that selects and protects the most susceptible workload. We propose to rank workload susceptibility as the likelihood that any error bypasses the logic masking of the circuit and propagates to its outputs. The susceptible workload is protected by a partial Triple Modular Redundancy (TMR) scheme. We evaluate the proposed technique on timing-independent and timing-dependent errors induced by permanent and transient faults. In comparison with an unranked selective fault tolerance approach, we demonstrate (a) similar error coverage with a 39.7% average reduction of the area overhead or (b) an 86.9% average error coverage improvement for a similar area overhead. For the same-area-overhead case, we observe an error coverage improvement of 53.1% and 53.5% against permanent stuck-at and transition faults, respectively, and an average error coverage improvement of 151.8% and 89.0% against timing-dependent and timing-independent transient faults, respectively. Compared to full TMR, the proposed technique achieves an area and power overhead reduction of 145.8% to 182.0%.
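To make the partial-TMR idea concrete, the following is a minimal C sketch of a combinational block that is triplicated and voted only on a selected subset of output bits; the `block()` function, the `protect_mask` parameter, and the voter are illustrative assumptions, not the paper's circuit or its susceptibility-ranking procedure.

```c
#include <stdint.h>

/* Hypothetical combinational block under protection (stand-in logic). */
static uint32_t block(uint32_t in) { return (in ^ (in >> 3)) + 0x9E3779B9u; }

/* Bitwise 2-out-of-3 majority voter. */
static uint32_t vote(uint32_t a, uint32_t b, uint32_t c) {
    return (a & b) | (a & c) | (b & c);
}

/* Partial TMR: only the output bits flagged in protect_mask (e.g. those
 * ranked most susceptible to error propagation) take the voted value;
 * the remaining bits come from a single unprotected copy. */
uint32_t partial_tmr(uint32_t in, uint32_t protect_mask) {
    uint32_t r0 = block(in);   /* copy 0 (always present)       */
    uint32_t r1 = block(in);   /* copy 1 (replicated hardware)  */
    uint32_t r2 = block(in);   /* copy 2 (replicated hardware)  */
    uint32_t voted = vote(r0, r1, r2);
    return (voted & protect_mask) | (r0 & ~protect_mask);
}
```

In hardware the three copies are physical replicas, so a fault in one copy is outvoted on the protected bits while the unprotected bits keep the single-copy area cost.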

2.
The SHA-256 secure hash algorithm is widely used in data integrity verification, digital signatures, and related fields. To meet the requirements of secure SoC systems for a SHA-256 design with a high operating frequency and low hardware cost, a novel SHA-256 VLSI implementation method is proposed: the algorithm's computation steps are decomposed so as to shorten the critical path and save hardware resources. Synthesized in the SMIC 0.13 μm CMOS process, the design reaches a maximum operating frequency of 334.5 MHz while reducing resource consumption by 70%.
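As background for the critical-path discussion, the sketch below is a plain C model of one SHA-256 compression round (FIPS 180-4 notation). Splitting T1 so that the sum h + K[t] + W[t] is computed one round ahead is one common way to shorten the adder chain; it is shown only as an illustration and is not claimed to be the paper's exact decomposition.

```c
#include <stdint.h>

#define ROTR(x, n)  (((x) >> (n)) | ((x) << (32 - (n))))
#define CH(e, f, g)  (((e) & (f)) ^ (~(e) & (g)))
#define MAJ(a, b, c) (((a) & (b)) ^ ((a) & (c)) ^ ((b) & (c)))
#define SIG0(a) (ROTR(a, 2) ^ ROTR(a, 13) ^ ROTR(a, 22))
#define SIG1(e) (ROTR(e, 6) ^ ROTR(e, 11) ^ ROTR(e, 25))

/* One SHA-256 round on the working variables s[0..7] = a..h.
 * kw_pre = h + K[t] + W[t]: none of these inputs depend on the current
 * round's results, so in hardware this sum can be registered one cycle
 * ahead, shortening the critical path of the remaining additions. */
void sha256_round(uint32_t s[8], uint32_t kw_pre) {
    uint32_t t1 = kw_pre + SIG1(s[4]) + CH(s[4], s[5], s[6]);
    uint32_t t2 = SIG0(s[0]) + MAJ(s[0], s[1], s[2]);
    s[7] = s[6]; s[6] = s[5]; s[5] = s[4];
    s[4] = s[3] + t1;                 /* e = d + T1      */
    s[3] = s[2]; s[2] = s[1]; s[1] = s[0];
    s[0] = t1 + t2;                   /* a = T1 + T2     */
}

/* Caller side: kw_pre = s[7] + K[t] + W[t], computable one round early. */
```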

3.
SHA is a family of secure hash algorithms designed by the U.S. National Security Agency (NSA), used mainly for communication integrity verification and digital signature authentication. Targeting area optimization, this work proceeds from system-level down to module-level design and, using a concrete design as an example, implements the SHA-1 algorithm on a smart card chip at a small area cost. The approach provides a general reference for the design of similar hash algorithms.

4.
朱宁龙, 戴紫彬, 张立朝, 赵峰. 《微电子学》, 2015, 45(6): 777-780, 784
To address the differing hash algorithm standards and application requirements in China and abroad, a data-flow-reconfigurable design approach is adopted. Based on an analysis of the distinct characteristics of the SM3 and SHA-2 family hash algorithms, a unified processing model is derived and a new hardware architecture is designed around it. Depending on the security strength required by a given environment, the architecture can flexibly implement SM3, SHA-256, SHA-384, or SHA-512 individually. Experimental results show that the proposed hardware circuit effectively reduces hardware resource consumption and improves system throughput, satisfying the requirements of both Chinese and international commercial hash algorithms.

5.
郭旭峰, 于芳, 刘忠立. 《电子学报》, 2013, 41(7): 1371-1377
Existing memory built-in self-repair (BISR) methods either compare fault addresses one by one, which is inefficient, or compare them in parallel, which consumes too much power; neither suits memories with a large number of faults. This paper proposes an efficient memory BISR method that stores the addresses of single-cell faults, which make up the bulk of all faults, in a hash table, exploiting the hash table's fast lookup to speed up address comparison. A memory repaired with this method completes address comparison within one clock cycle, so post-repair memory performance is unaffected, on a par with the widely used CAM-based methods, while offering a clear power advantage. Computer simulations show that, for a 512×512×8-bit memory with the same redundancy overhead, the repair rate of the proposed method is on average 32.25% higher than that of the ESP method.
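A hedged software model of the hash-table lookup idea follows: faulty cell addresses are kept in a small open-addressing table that maps them to spare rows, so each access needs only a short probe instead of scanning all recorded faults. The table size, hash function, probing strategy, and spare-row mapping are illustrative assumptions, not the paper's hardware organization.

```c
#include <stdint.h>

#define TBL_SIZE 64u               /* assumed repair capacity, power of two */
#define EMPTY    0xFFFFFFFFu

typedef struct {
    uint32_t fault_addr[TBL_SIZE]; /* recorded faulty addresses */
    uint32_t spare_row[TBL_SIZE];  /* spare row mapped to each  */
} repair_table_t;

static uint32_t h(uint32_t addr) { return (addr * 2654435761u) & (TBL_SIZE - 1); }

void bisr_init(repair_table_t *t) {
    for (uint32_t i = 0; i < TBL_SIZE; i++) t->fault_addr[i] = EMPTY;
}

/* Record a faulty address; returns 0 if the table is full. */
int bisr_record(repair_table_t *t, uint32_t addr, uint32_t spare) {
    for (uint32_t i = 0, k = h(addr); i < TBL_SIZE; i++, k = (k + 1) & (TBL_SIZE - 1)) {
        if (t->fault_addr[k] == EMPTY || t->fault_addr[k] == addr) {
            t->fault_addr[k] = addr;
            t->spare_row[k]  = spare;
            return 1;
        }
    }
    return 0;
}

/* On every access, redirect known-faulty addresses to their spare row. */
uint32_t bisr_translate(const repair_table_t *t, uint32_t addr) {
    for (uint32_t i = 0, k = h(addr); i < TBL_SIZE; i++, k = (k + 1) & (TBL_SIZE - 1)) {
        if (t->fault_addr[k] == addr)  return t->spare_row[k];
        if (t->fault_addr[k] == EMPTY) break;   /* not a recorded fault */
    }
    return addr;                                /* access original cell */
}
```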

6.
Processor cores embedded in systems-on-a-chip (SoCs) are often deployed in critical computations, and when affected by faults they may produce dramatic effects. When hardware hardening is not cost-effective, software-implemented hardware fault tolerance (SIHFT) can be a solution to increase SoCs' dependability, but it increases the run time of the hardened application as well as its memory occupation. In this paper we propose a method that eliminates the memory overhead by exploiting a new approach to instruction hardening and control flow checking. The proposed method hardens an application online during its execution, without requiring any change to its source code, and is non-intrusive, since it does not require any modification of the main processor's architecture. The method has been tested with two widely used architectures, a microcontroller and a RISC processor, and proven to be suitable for hardening SoCs against transient faults and also for detecting permanent faults.
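For readers unfamiliar with what control flow checking verifies, the sketch below shows a classic compile-time, CFCSS-style signature check in C; the signature values and macro are assumptions made purely for illustration. The paper's contribution is performing an equivalent check online, without modifying the source code, which this sketch does not attempt to reproduce.

```c
#include <stdint.h>
#include <stdlib.h>

static uint32_t g_sig;                       /* running control-flow signature */

static void cfc_error(void) { abort(); }     /* detected control-flow error */

/* At block entry: fold in the precomputed difference from the legal
 * predecessor and compare against the block's own static signature. */
#define BLOCK_ENTER(sig_cur, sig_pred)                              \
    do {                                                            \
        g_sig ^= ((uint32_t)(sig_pred) ^ (uint32_t)(sig_cur));      \
        if (g_sig != (uint32_t)(sig_cur)) cfc_error();              \
    } while (0)

int guarded_step(int x) {
    g_sig = 0xA1u;            /* entry block, signature 0xA1 */

    /* block B1 (signature 0xB2), legally reached only from entry (0xA1) */
    BLOCK_ENTER(0xB2, 0xA1);
    x = x * 3 + 1;

    /* block B2 (signature 0xC3), legally reached only from B1 (0xB2) */
    BLOCK_ENTER(0xC3, 0xB2);
    return x;
}
```

An illegal jump (say, from the entry block straight into B2) leaves the running signature different from 0xC3, so the comparison fires.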

7.
An SHA-1 implementation based on a loop-unrolling structure
In information security, hash algorithms are mainly used for data integrity verification and signature authentication. Based on an in-depth analysis of the SHA-1 algorithm, a hardware scheme for implementing it quickly is proposed. The scheme modifies the iteration structure of the standard algorithm to reduce the number of clock cycles needed to process a message, thereby increasing throughput. Compared with other IP cores, the design shows clear advantages in area, frequency, and throughput.

8.
In this brief, we propose two new concurrent error-detection (CED) schemes for a class of sorting networks, e.g., odd-even transposition, bitonic, and perfect shuffle sorting networks. A probabilistic method is developed to analyze the fault coverage, and the hardware overhead is evaluated. We first propose a CED scheme by which all errors caused by single faults in a concurrent checking sorting network can be detected. This scheme is the first one available to use significantly less hardware overhead than duplication without compromising throughput. From this scheme, we develop another fault detection scheme which sharply reduces the hardware overhead (using an additional 10%-30% hardware) but still achieves virtually 100% fault coverage.

9.
10.
Concurrent error detection (CED) based on time redundancy entails performing the normal computation and a re-computation at different times and then comparing their results. Time redundancy, as conventionally implemented, can only detect transient faults. We present two algorithm-level time-redundancy-based CED schemes that exploit register transfer level (RTL) implementation diversity to detect both transient and permanent faults. At the RTL, implementation diversity can be achieved either by changing the operation-to-operator allocation or by shifting the operands before re-computation. By exploiting allocation diversity and data diversity, a stuck-at fault affects the two results in two different ways. The proposed schemes yield good fault detection probability with very low area overhead. We used the Synopsys Behavioral Compiler (BC) to validate the schemes.
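The data-diversity half of the idea can be illustrated with the classic recompute-with-shifted-operands check, sketched below as a C software model (the schemes themselves operate at RTL, and the shift amount here is an assumption). A permanent fault in one adder bit slice would corrupt the direct and the shifted passes in different bit positions, so the comparison catches it; in this fault-free software model the two passes always agree.

```c
#include <stdint.h>

/* Software model of "recompute with shifted operands" for a 32-bit add.
 * Pass 1: plain addition. Pass 2: operands shifted left by k so each
 * operand bit travels through a different adder bit slice, then the
 * result is shifted back and compared with the first pass. */
uint32_t add_with_shifted_recompute(uint32_t a, uint32_t b, int *mismatch) {
    const unsigned k = 2;                               /* assumed shift   */
    uint32_t direct = a + b;                            /* pass 1          */
    uint64_t shifted = ((uint64_t)a << k) + ((uint64_t)b << k);
    uint32_t recomputed = (uint32_t)(shifted >> k);     /* pass 2, unshift */
    *mismatch = (direct != recomputed);                 /* error indicator */
    return direct;
}
```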

11.
The EnRUPT hash functions were proposed by O'Neil, Nohl and Henzen as candidates for the SHA-3 competition organised by NIST. The proposal contains seven concrete hash functions, each with a different digest length. We present a practical collision attack on each of these seven EnRUPT variants. The time complexity of our attack varies from 2^36 to 2^40 round computations, depending on the EnRUPT variant, and the memory requirements are negligible. We demonstrate that our attack is practical by giving an actual collision example for EnRUPT-256.

12.
High-speed and low-area hardware architectures of the Whirlpool hash function are presented in this paper. A full look-up table (LUT) based design is shown to be the fastest way, in terms of logic, to implement the non-linear layer of the algorithm. An unrolled Whirlpool architecture implemented on the Virtex XC4VLX100 device achieves a throughput of 4.9 Gbps. This is faster than a SHA-512 design implemented on the same device and other previously reported hash function architectures. A low-area iterative architecture, which uses 64-bit operations as opposed to full 512-bit operations, is also described. It runs at 430 Mbps and occupies 709 slices on a Virtex XC4VLX15. This proves to be one of the smallest 512-bit hash function architectures currently available.

13.
崔东岳, 龙兵, 曾浩, 向川云. 《电子质量》, 2010, (12): 19-21, 28
The secure hash algorithm is a widely used cryptographic algorithm in information security. This work models the SHA-1 algorithm and implements it as a hardware circuit. The FPGA implementation of SHA-1 adopts parallel processing and optimizes the implementation flow; a modular design shortens the implementation cycle and reduces the amount of memory resources occupied. Synthesis and simulation finally verify the correctness of the implementation.

14.
This paper proposes a pure software technique, "error detection by duplicated instructions" (EDDI), for detecting errors during normal system operation. Compared to other error-detection techniques that use hardware redundancy, EDDI does not require any hardware modification to add error detection capability to the original system. EDDI duplicates instructions during compilation and uses different registers and variables for the new instructions. In particular, for faults in the code segment of memory, formulas are derived to estimate the error-detection coverage of EDDI using probabilistic methods. These formulas use statistics of the program that are collected during compilation. EDDI was applied to eight benchmark programs and the error-detection coverage was estimated. The estimates were then verified by simulation, in which a fault injector forced a bit flip in the code segment of the executable machine code. The simulation results validated the estimated fault coverage and show that approximately 1.5% of injected faults produced incorrect results in the eight benchmark programs with EDDI, while on average 20% of injected faults produced undetected incorrect results in the programs without EDDI. Based on the theoretical estimates and actual fault-injection experiments, EDDI can provide over 98% fault coverage without any extra hardware for error detection. This pure software technique is especially useful when designers cannot change the hardware but need dependability in the computer system. To reduce the performance overhead, EDDI schedules the instructions added for detecting errors such that instruction-level parallelism (ILP) is maximized. Performance overhead can be reduced by increasing ILP within a single superscalar processor: the execution time overhead in a 4-way superscalar processor is lower than in processors that can issue two instructions in one cycle.
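A hedged C-level illustration of the EDDI idea is given below; the real transformation happens at the instruction level during compilation, duplicating instructions into different registers, so the function and variable names here (the `_dup` shadow copies, the error handler) are assumptions used only to show the shape of the check.

```c
#include <stdint.h>
#include <stdlib.h>

static void eddi_fault_detected(void) { abort(); }   /* assumed error handler */

/* Source-level model of an EDDI-hardened accumulation: every computation
 * is performed twice on separate copies of the data (which would live in
 * different registers/variables after duplication), and the copies are
 * compared before the result is committed. A bit flip that corrupts only
 * one of the two instruction streams makes the comparison fail. */
int32_t sum_hardened(const int32_t *v, const int32_t *v_dup, int n) {
    int32_t acc = 0, acc_dup = 0;
    for (int i = 0; i < n; i++) {
        acc     += v[i];       /* master instruction stream     */
        acc_dup += v_dup[i];   /* duplicated instruction stream */
    }
    if (acc != acc_dup)        /* compare before the final "store" */
        eddi_fault_detected();
    return acc;
}
```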

15.
This work corrects the analysis of the Grøstl distinguishers given by Minier et al. at CANS 2010 and improves the integral distinguishers for the compression function of Grøstl. Making full use of the penetration technique, 11-round integral distinguishers for the P and Q permutations are proposed for the first time. Although analysis of the hash functions themselves is the mainstream of current SHA-3 research, the proposed study of integral distinguishers reflects the pseudo-randomness of the compression function and is of real significance for the design of new hash functions.

16.
This paper presents a quantitative reliability analysis of a system designed to tolerate both hardware and software faults. The system achieves integrated fault tolerance by implementing N-version programming (NVP) on redundant hardware. The analysis considers unrelated software faults, related software faults, transient hardware faults, permanent hardware faults, and imperfect coverage. The overall model is a Markov model in which the states of the Markov chain represent the long-term evolution of the system structure. For each operational configuration, a fault-tree model captures the effects of software faults and transient hardware faults on the task computation. The software fault model is parameterized using experimental data from a recent implementation of an NVP system using the current design paradigm. The hardware model is parameterized by considering typical failure rates associated with hardware faults and coverage parameters. The authors' results show that it is important to consider both hardware and software faults in the reliability analysis of an NVP system, since these estimates vary with time. Moreover, the function for error detection and recovery is extremely important to fault-tolerant software: several orders of magnitude reduction in system unreliability can be observed if this function is provided promptly.

17.
Carry checking/parity prediction adders and ALUs
In this paper, we present efficient self-checking implementations valid for all existing adder and arithmetic and logic unit (ALU) schemes (e.g., ripple carry, carry lookahead, and carry skip schemes). Among all known self-checking adder and ALU designs, the parity prediction scheme has the advantage of requiring the minimum hardware overhead for the adder/ALU and the minimum hardware overhead for the other data-path blocks. It is also compatible with memory systems checked by parity codes. The drawback of this scheme is that it is not fault-secure for single faults. The scheme proposed in this work has all the advantages of the parity prediction scheme and, in addition, is totally self-checking for single faults. Thus, the new scheme is substantially better than any other known solution.
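The parity-prediction relation the abstract builds on can be stated directly: for a carry-propagate adder with sum bits s_i = a_i xor b_i xor c_i, the parity of the sum equals parity(A) xor parity(B) xor the parity of the internal carries. The C model below checks that relation bit-serially; it is a behavioural sketch of the relation only, not the self-checking circuit itself.

```c
#include <stdint.h>

static unsigned parity32(uint32_t x) {
    x ^= x >> 16; x ^= x >> 8; x ^= x >> 4; x ^= x >> 2; x ^= x >> 1;
    return x & 1u;
}

/* Behavioural model of a carry-propagate adder that also reports whether
 * the predicted sum parity  P(A) ^ P(B) ^ P(carries)  matches the parity
 * of the produced sum. In hardware the predicted parity comes from a
 * separate checker; a fault that flips a sum bit without the matching
 * carry makes the two parities disagree. */
uint32_t add_parity_checked(uint32_t a, uint32_t b, unsigned cin, int *parity_ok) {
    uint32_t sum = 0, carries = 0;
    unsigned c = cin & 1u;
    for (int i = 0; i < 32; i++) {
        unsigned ai = (a >> i) & 1u, bi = (b >> i) & 1u;
        carries |= (uint32_t)c << i;             /* carry INTO bit i   */
        sum     |= (uint32_t)(ai ^ bi ^ c) << i; /* s_i = a_i^b_i^c_i  */
        c = (ai & bi) | (ai & c) | (bi & c);     /* carry out of bit i */
    }
    unsigned predicted = parity32(a) ^ parity32(b) ^ parity32(carries);
    *parity_ok = (predicted == parity32(sum));
    return sum;
}
```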

18.
This paper presents a novel asynchronous design approach for multiple-input multiple-output (MIMO) satellite communication (SatCom) systems. One of the main challenges for MIMO SatCom systems is that they are prone to transient faults, typically attributable to radiation hazards. Hence, instead of using conventional synchronous circuits, we build our design on asynchronous circuits, which are inherently highly tolerant of transient faults. Additionally, we adopt an accelerated dual paths (ADP) design in our system. By carefully arranging the data flow between the two paths, the ADP approach further accelerates the asynchronous system and increases its reliability by circumventing delays induced by transient faults, as well as tolerating latch-ups and other permanent faults. The numerical results are promising; for example, the proposed design decreases the delay overhead of the entire system from 43.5% to 19.8% at a fault rate of 400 per clock cycle.

19.
In December 2010, NIST selected five SHA-3 finalists - BLAKE, Grøstl, JH, Keccak, and Skein - to advance to the third (and final) round of the SHA-3 competition. At present, most specialists and scholars focus on the design of and attacks on these hash functions. However, it is also very significant to study properties of their primitives and underlying permutations, because such properties reflect the pseudo-randomness of the structures and, moreover, help us find new cryptanalytic results for some block cipher structures. In this paper, we analyze the resistance of JH and Grøstl-512 against structural properties built on integral distinguishers. Integral distinguishers covering 31.5 (out of 42) rounds of the JH compression function and 11.5 (out of 14) rounds of the Grøstl-512 compression function are then presented.

20.
This paper presents an approach for increasing the lifetime of systems implemented on SRAM-based FPGAs by introducing fault tolerance properties that enable the system to autonomously manage the occurrence of both transient and permanent faults. On the basis of the foreseen mission time and application environment, the designer is supported in implementing a system able to reconfigure itself, either by reloading the correct configuration in case of transient faults or by relocating part of the functionality in the presence of permanent faults. The result is a system implementation offering good performance and correct functionality even when faults occur. The proposed approach is evaluated in a case study to highlight the overall characteristics of the final implementation.
