Similar Documents
20 similar documents found.
1.
This paper presents a technique to correct multiple logic design errors in a gate-level netlist. A number of methods have been proposed for correcting single logic design errors; however, the extension of these methods to more than one error is still very limited. We direct our attention to circuits with a low multiplicity of errors. By assuming different error dependency scenarios, multiple errors are corrected by repeatedly applying a single-error search and correction algorithm. Experimental results on correcting double and triple design errors on ISCAS and MCNC benchmark circuits are included.

2.
We investigate an automated design validation scheme for gate-level combinational and sequential circuits that borrows methods from simulation and test generation for physical faults, and verifies a circuit with respect to a modeled set of design errors. The error models used in prior research are examined and reduced to five types: gate substitution errors (GSEs), gate count errors (GCEs), input count errors (ICEs), wrong input errors (WIEs), and latch count errors (LCEs). Conditions are derived for a gate to be testable for GSEs, which lead to small, complete test sets for GSEs; near-minimal test sets are also derived for GCEs. We analyze undetectability in design errors and relate it to single stuck-line (SSL) redundancy. We show how to map all the foregoing error types into SSL faults, and describe an extensive set of experiments to evaluate the proposed method. These experiments demonstrate that high coverage of the modeled errors can be achieved with small test sets obtained with standard test generation and simulation tools for physical faults.

3.
As technology scales down, shrinking geometry and layout dimensions, on-chip interconnects are exposed to different noise sources such as crosstalk coupling, supply voltage fluctuation and temperature variation, which cause random and burst errors. These errors affect the reliability of the on-chip interconnects. Hence, error correction codes integrated with noise reduction techniques are incorporated to make the on-chip interconnects robust against errors. The proposed error correction code uses a triplication error correction scheme as a crosstalk avoidance code (CAC), and a parity bit is added to it to enhance the error correction capability. The proposed code corrects all one-bit and two-bit error patterns, corrects 7 out of the 10 possible three-bit error patterns, and detects burst errors of length three. A Hybrid Automatic Repeat reQuest (HARQ) system is employed when burst errors of length three occur. The performance of the proposed codec is evaluated for residual flit error rate, codec area, power, delay, average flit latency and link energy consumption. The proposed codec achieves a residual flit error rate four orders of magnitude lower and a link energy reduction of over 53% compared to other existing error correction schemes. Besides the low residual flit error rate and link energy, the proposed codec also achieves up to 4.2% less area and up to 6% less codec power consumption compared to other error correction codes. The small codec area, low codec power consumption, low link energy and low residual flit error rate make the proposed code appropriate for on-chip interconnection links.
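The core triplication-plus-parity idea can be sketched as follows. This is a minimal illustration of plain bit triplication with majority voting and one parity bit, not the paper's codec (which corrects more patterns); the function names and the simple bit-list interface are assumptions:

```python
def encode(bits):
    # triplicate each data bit (the repeated pattern also serves as a
    # crosstalk-avoidance pattern on adjacent wires)
    coded = [b for bit in bits for b in (bit, bit, bit)]
    parity = sum(bits) % 2  # one extra parity bit over the data bits
    return coded + [parity]

def decode(coded):
    *trip, parity = coded
    # majority vote within each triple corrects any single error in it
    data = [1 if sum(trip[i:i + 3]) >= 2 else 0 for i in range(0, len(trip), 3)]
    # a parity mismatch flags residual (miscorrected) errors, which can
    # trigger a HARQ retransmission request
    ok = (sum(data) % 2) == parity
    return data, ok
```

A single error per triple is corrected silently; two errors in the same triple defeat the vote but are caught by the parity check.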

4.
We show that the problem of signal reconstruction from missing samples can be handled by using reconstruction algorithms similar to Reed-Solomon (RS) decoding techniques. Usually, the RS algorithm is used for error detection and correction of samples in finite fields. For the case of missing samples of a speech signal, we work with samples in the field of real or complex numbers, and we can use the FFT or some new transforms in the reconstruction algorithm. DSP implementation and simulation results show that the proposed methods are better than the ones previously published in terms of the quality of the recovered speech signal for a given complexity. The burst error recovery method using the FFT kernel is sensitive to quantization and additive noise, like the other techniques; however, other proposed transform kernels are very robust in correcting bursts of errors in the presence of quantization and additive noise.
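The flavour of missing-sample recovery with an FFT/DFT kernel can be illustrated with a Papoulis-Gerchberg-style iteration: if the signal is known to be band-limited, alternately enforce the band limit in the frequency domain and re-impose the known samples in the time domain. This is a pure-Python sketch with a naive DFT, not the authors' algorithm; the signal, band and function names are assumptions:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def recover(samples, missing, band, iters=100):
    # Papoulis-Gerchberg style iteration: alternate band-limiting in the
    # frequency domain with re-imposing the known time-domain samples
    N = len(samples)
    x = [0.0 if n in missing else samples[n] for n in range(N)]
    for _ in range(iters):
        X = dft(x)
        # keep only the in-band harmonics (indices in `band`, incl. conjugates)
        X = [X[k] if k in band else 0 for k in range(N)]
        x = idft(X)
        # restore the samples we actually know
        x = [x[n] if n in missing else samples[n] for n in range(N)]
    return x
```

For a signal exactly inside the assumed band, the iteration converges geometrically to the true values at the missing positions.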

5.
The recent advancements in implementation technologies have brought to the fore a wide spectrum of new defect types and reliability phenomena. Conventional design techniques do not cope with the integration capacity and stringent requirements of today's nanometer technology nodes, and several analysis tasks must be rethought; timing-critical path analysis is one such task. It has applications in gate-level reliability analysis, e.g., Bias Temperature Instability (BTI) induced aging, but also several others. In this paper, we propose a fast simulation-based technique for explicit identification of true timing-critical paths in both combinational and sequential circuits to enable reliability mitigation approaches, such as selecting paths for delay monitor insertion, resizing delay-critical gates or applying rejuvenation stimuli. The high scalability of the method is achieved by a novel fast method for finding activated paths for many test patterns in parallel, a novel algorithm that determines only a small subset of critical paths, and a novel method for identifying the true critical paths among this subset using a branch-and-bound strategy. The paper demonstrates efficient application of the proposed technique to gate-level NBTI-critical path identification. The experimental results prove the feasibility and scalability of the technique.

6.
This paper describes the design and implementation of a fully monolithic 16-b, 1-Msample/s, low-power A/D converter (ADC). An on-chip 32-b custom microcontroller calibrates and corrects the pipeline linearity to within 0.75 LSB integral nonlinearity (INL) and 0.6 LSB differential nonlinearity (DNL). High speed and low power are achieved using a pipelined architecture. Errors resulting from capacitor mismatches, finite op-amp open-loop gain, charge injection and comparator offset are removed through self-calibration. Coefficients determined during calibration are stored on chip, digitally correcting the pipeline ADC in real time during normal conversion. Full-scale errors are removed through self-calibration and on-chip multiplication. Linearity errors due to capacitor voltage coefficients are reduced using a curve-fit algorithm and on-chip ROM. Digital crosstalk errors resulting from the microcontroller running at ten times the analog sampling rate have prevented implementations of fully monolithic converters of this performance class in the past. Mismatches in crosstalk due to different digital timing between calibration and correction lead to linearity errors at critical correction points. Experimental analysis and circuit techniques which overcome these problems are presented.

7.
An error-correcting system for mobile radio data transmission with improved reliability and simple implementation is presented here. The new rate one-half code absolutely corrects two errors within 12 consecutive bits, while the (15, 7, 2) Bose-Chaudhuri-Hocquenghem (BCH) code corrects two errors within 15 bits and Hagelbarger's code corrects two errors within 14 bits. Error propagation in the feedback majority-logic decoder is discussed, and it is shown empirically that the new code does not propagate infinite errors. In order to correct burst errors, a 12-column interleaving is proposed for fading channels.
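Column interleaving works by writing the coded stream into a matrix row by row and transmitting it column by column, so a channel burst is spread across many codewords, each of which then sees at most one error. A minimal sketch (generic column count; the function names and list interface are assumptions):

```python
def interleave(bits, cols=12):
    # write row-by-row, read column-by-column; a burst of up to `rows`
    # consecutive channel errors lands in different rows, i.e. positions
    # that are `cols` apart in the original stream
    rows = len(bits) // cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, cols=12):
    rows = len(bits) // cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]
```

After deinterleaving, a burst of three channel hits is separated by the column count, so a code that corrects two errors within 12 consecutive bits sees isolated errors instead of a burst.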

8.
In software-defined networking (SDN), the controller relies on information collected from the data plane for route planning, load balancing, and other functions. Statistics information is the most important kind of such information, so its correctness is key to the proper operation of the network. Most current research on the data plane focuses on policy consistency, rule redundancy, forwarding anomalies, and so on, and little attention is paid to whether the statistics information uploaded by the switches to the controller is correct. However, incorrect statistics inevitably lead the controller to make wrong decisions. Therefore, this paper proposes an audit-based malicious information correction mechanism to address the problem of wrong statistics uploaded by switches. This mechanism audits the statistics and locates malicious switches before the statistics are uploaded to the controller, identifying and correcting statistics errors by combining flow paths with the statistics themselves. We have performed simulations on Nsfnet, Abilene, and Fat-Tree topologies, and the results show that our method can correct about 70% of statistics errors with little computational cost. To the best of our knowledge, this is the first malicious statistics correction scheme for wildcard rules.
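One simple form of such an audit is a flow-conservation check: on a loss-free unicast path, every switch should report the same counter value for a flow, so an honest majority can outvote and overwrite a lying switch. This toy sketch is not the paper's mechanism; the names, the exact-equality assumption and the majority rule are all assumptions:

```python
def audit_and_correct(path_counts):
    # consensus = the count reported by the most switches on the path
    consensus = max(set(path_counts), key=path_counts.count)
    # switches disagreeing with the consensus are flagged as suspect
    suspects = [i for i, c in enumerate(path_counts) if c != consensus]
    # suspect reports are replaced by the consensus value before upload
    corrected = [consensus if i in suspects else c
                 for i, c in enumerate(path_counts)]
    return suspects, corrected
```

In practice counters drift slightly due to in-flight packets, so a real audit would compare within a tolerance rather than for exact equality.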

9.
10.
This article describes three aspects of asynchronous design from a Petri-net specification called a signal transition graph (STG). First, we show that the STG defined by Chu [1] is too restrictive for specifying general asynchronous behavior and propose extensions to the STG which allow a more general and compact representation. Second, we show that syntactic constraints on STGs are not sufficient to guarantee hazard-free implementations under the unbounded gate delay model, and present techniques to synthesize two-level implementations which are hazard-free under the multiple signal change condition. To remove all hazards under the multiple signal change condition, the initial specification may need to be modified. Finally, we show that the behavior containment test using the event coordination model [2] is a powerful tool for the formal verification of asynchronous circuits. This verification method can provide sanity checks for all synthesis methods that use the unbounded gate delay model, and provides a mechanism for designers to validate some manual gate-level changes to the final design.

11.
In today’s complex and challenging VLSI design process, multiple logic errors may occur due to human factors and bugs in CAD tools. The designer often faces the challenge of correcting an erroneous design implementation. This study describes a simulation-based logic debugging solution for combinational circuits corrupted with multiple design errors. Unlike other simulation-based techniques that identify all errors at once, the proposed method works incrementally. At each iteration of incremental debugging, a single candidate location is rectified with linear-time algorithms, so that the functionality of the erroneous design gradually matches the correct one. A number of theorems, heuristics and data structures help identify a single candidate solution at each iteration and also guide the search in the large solution space. Experiments on benchmark circuits confirm the effectiveness of incremental logic debugging.

12.
Forward error correction (FEC) techniques are applied to INTELSAT intermediate data rate (IDR) services to reduce the transmitted power requirements relative to uncoded systems, thereby increasing the satellite transponder capacity. Properly designed FEC systems reduce data errors in the received digital stream with minimal impact on the protocols, operation and equipment involved in the communication system. This paper discusses the overall system impact of the addition of FEC to the IDR service. The appropriateness of the specific application of convolutional encoding combined with Viterbi decoding techniques is discussed. Basic concepts of convolutional encoding and Viterbi decoding are presented in order to provide an understanding of the trade-offs involved in the specification of the coding technique for the IDR service. The details of the IDR FEC specification are presented. The implementation of a system based on a VLSI Viterbi decoder device which conforms to the INTELSAT requirements is described.
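The encoder/decoder pair discussed above can be illustrated with the classic rate-1/2, constraint-length-3 code with generators 7 and 5 (octal) and hard-decision Viterbi decoding. This is a teaching sketch, not the IDR codec (deployed systems use longer constraint lengths and soft decisions); all names are assumptions:

```python
def conv_encode(bits, g=(0b111, 0b101)):
    # rate-1/2 convolutional encoder; the 3-bit register holds the
    # current input (LSB) and the two previous inputs
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        for gen in g:
            out.append(bin(state & gen).count('1') % 2)
    return out

def viterbi_decode(received, g=(0b111, 0b101)):
    # hard-decision Viterbi over the 4-state trellis: keep, per state,
    # the minimum-Hamming-distance path and its input sequence
    metric, paths = {0: 0}, {0: []}
    for t in range(len(received) // 2):
        r = received[2 * t:2 * t + 2]
        new_metric, new_paths = {}, {}
        for state, m in metric.items():
            for b in (0, 1):
                full = ((state << 1) | b) & 0b111
                expect = [bin(full & gen).count('1') % 2 for gen in g]
                cost = m + sum(x != y for x, y in zip(r, expect))
                ns = full & 0b011      # next state keeps the 2 newest bits
                if cost < new_metric.get(ns, float('inf')):
                    new_metric[ns] = cost
                    new_paths[ns] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)  # survivor with the best metric
    return paths[best]
```

With free distance 5, this code corrects any two sufficiently separated channel errors, which is the trade-off (coding gain versus bandwidth expansion) the paper weighs for the IDR service.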

13.
Phase errors incurred by a surface acoustic wave propagating through, or generated by, an apodized interdigital array have been found to cause severe distortions of the filter response. The amount of error is a function of the piezoelectric coupling constant of the delay material, and the distortions are found to be most severe in high-coupling materials. A simple modification to the current method of array design is presented which corrects this phase error. Experimental results for a pulse-compression loop using apodized lithium-niobate surface-wave filters are presented which demonstrate the effectiveness of this method of phase correction.

14.
We present a computation reduction technique called the computation sharing differential coefficient (CSDC) method, which can be used to obtain low-complexity multiplierless implementations of finite-impulse response (FIR) filters. It is also applicable to digital signal processing tasks involving multiplications with a set of constants. The main idea of the proposed CSDC method is to combine the strengths of the augmented differential coefficient approach and subexpression sharing. Exploiting computation reuse through algorithmic equivalence, the augmented differential coefficient approach greatly expands the design space by employing both differences and sums of filter coefficients. The expanded design space is represented by an undirected complete graph. The problem of minimizing the adder cost (the number of additions/subtractions) for a given filter is transformed into a problem of searching for an appropriate subexpression set that leads to a minimal adder cost. A heuristic search algorithm based on a genetic algorithm is developed to search for low-complexity solutions over the expanded design space in conjunction with exploiting subexpression sharing. It is shown that up to 70.1% reduction in adder cost can be obtained over the conventional multiplierless implementation. Comparison with several existing techniques based on the available data shows that our method yields comparable results for multiplierless FIR filter implementation.
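The flavour of differential coefficients combined with subexpression sharing can be seen in a tiny example: once one coefficient product is built from shifts and adds, a second coefficient whose difference from the first is a power of two costs only a single extra addition. The coefficients 7 and 23 below are illustrative, not from the paper:

```python
def taps_differential(x):
    # shared subexpression: 3x = 2x + x (one add)
    t = (x << 1) + x
    # first tap: 7x = 2*(3x) + x (one shift, one add)
    y7 = (t << 1) + x
    # differential coefficient: 23 - 7 = 16 is a power of two,
    # so 23x = 7x + 16x costs a single shift-add instead of a
    # fresh shift-add decomposition of 23
    y23 = y7 + (x << 4)
    return y7, y23
```

Counting operations, both products cost three additions in total; computing 23x from scratch as 16x + 4x + 2x + x would need three additions for that tap alone.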

15.
The use of the structure of one-step decodable majority-logic codes for enhanced and simplified vector symbol decoding, such as outer-code decoding of concatenated codes, is proposed. For J equations checking a particular symbol, the technique described almost always corrects the symbol if there are J-1 or fewer symbol errors, and often corrects cases with far more than J symbol errors. Ordinarily, majority-logic decoding with J equations for a symbol corrects the symbol in all cases where there are up to ⌊J/2⌋ errors. The decoding power is comparable to Reed-Solomon codes, but decoding is simpler than for Reed-Solomon codes.

16.
Logical initializability is the property of a gate-level circuit whereby it can be driven to a unique start state when simulated by a three-valued (0, 1, X) simulator. In practice, commercial logic and fault simulators often require initialization under such a three-valued simulation model. In this paper, the first sound and systematic synthesis method is proposed to ensure the logical initializability of synchronous finite-state machines. The method includes both state assignment and combinational logic synthesis steps. It is shown that a previous approach to synthesis-for-initializability, which uses a constrained state assignment method, may produce uninitializable circuits. Here, a new state assignment method is proposed that is guaranteed correct. Furthermore, it is shown that combinational logic synthesis also has a direct impact on initializability; necessary and sufficient constraints on combinational logic synthesis are proposed to guarantee that the resulting gate-level circuits are logically initializable. The above two synthesis steps have been incorporated into a computer-aided design tool, SALSIFY, targeted to both two-level and multilevel implementations.
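Three-valued (0, 1, X) simulation evaluates each gate so that a controlling input forces a known output even when the other inputs are unknown; a circuit is logically initializable when simulation from the all-X state reaches known values on every latch. A minimal sketch of the gate evaluation rules (the names are assumptions, not from the paper):

```python
X = 'X'  # the unknown value

def v_and(a, b):
    if a == 0 or b == 0:
        return 0              # a controlling 0 dominates the unknown
    if a == 1 and b == 1:
        return 1
    return X

def v_or(a, b):
    if a == 1 or b == 1:
        return 1              # a controlling 1 dominates the unknown
    if a == 0 and b == 0:
        return 0
    return X

def v_not(a):
    return X if a == X else 1 - a
```

This is why a gated reset initializes a latch under three-valued simulation: ANDing the unknown feedback value with a 0 reset yields a known 0, whereas a latch with no controlling input stays at X forever.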

17.
To address the detection of information-content security incidents in networks, this paper proposes a multi-dimensional user-behavior feature correlation analysis method based on association rules. To deal with the false-alarm problem, a test criterion based on the Bonferroni correction is proposed. To meet the demands of massive data, a distributed power-set Apriori algorithm under the Map-Reduce framework is proposed. Experimental results show that the proposed method and algorithms have strong parallel computing capability, achieve a good detection rate with low false-alarm and missed-detection rates, run quickly, and converge fast.
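The Apriori principle underlying the association-rule mining above (the paper distributes a power-set variant over Map-Reduce) can be sketched in its plain, single-machine form: a k-itemset can be frequent only if all of its (k-1)-subsets are frequent. The transaction format and names below are assumptions:

```python
from itertools import combinations

def apriori(transactions, min_support):
    # level 1: frequent single items
    items = {frozenset([i]) for t in transactions for i in t}
    current = {s for s in items
               if sum(s <= t for t in transactions) >= min_support}
    frequent = set(current)
    k = 2
    while current:
        # candidate generation: join frequent (k-1)-itemsets
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k}
        # Apriori prune (all (k-1)-subsets frequent), then count support
        current = {c for c in candidates
                   if all(frozenset(s) in frequent
                          for s in combinations(c, k - 1))
                   and sum(c <= t for t in transactions) >= min_support}
        frequent |= current
        k += 1
    return frequent
```

The prune step is what a Map-Reduce version parallelizes: candidate counting is sharded across mappers and supports are summed in the reducers.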

18.
In this paper a neural network method for optical proximity correction (OPC) is presented. A nonlinear two-dimensional spatial inverse filter is proposed as a deconvolution operator over the pattern plane. Test sets for the different correction strategies were prepared. A Boltzmann machine neural network is proposed for training-set preparation, and an optimal autoassociative neural network configuration was chosen for the actual correction. This strategy also opens the way to using sub-resolution, non-printable correction structures. Additionally, such a system can be interfaced with the next correction step, electron proximity correction (EPC); the possibility of merging both into one unit was investigated, and some initial results concerning a one-step OPC and EPC neurocorrector are presented. Different correction aggressiveness can be chosen straightforwardly by simply changing the correction kernels in the neurocorrector. There is no structural difference in the neurocorrector for attenuated, sized-rim, Levenson's, outrigger or chromeless methods, though different approaches to training-set generation have to be applied. Both feature-biasing and feature-assisted techniques were investigated. Some attempts at a quasi-analytical representation of the correction kernel, to minimize or even avoid the learning process, are also presented. Since the full power of artificial neural networks is best exploited in hardware, initial considerations on a VLSI architecture for the neurocorrector are also included.

19.
This brief proposes a new method for designing infinite-impulse response (IIR) filters with peak error and prescribed flatness constraints. It is based on model reduction of a finite-impulse response function that satisfies the specification, extending a method previously proposed by Brandenstein. The proposed model-reduction method retains the denominator of the conventional techniques and formulates the optimal design of the numerator as a second-order cone programming problem. Therefore, linear and convex quadratic inequalities, such as peak error constraints and a prescribed number of zeros at the stopband for IIR filters, can be imposed and solved optimally. Moreover, a method is proposed to express the denominator of the model-reduced IIR filter as a polynomial in integer powers of z, which efficiently facilitates its polyphase implementation in multirate applications. Design examples show that the proposed method gives better performance and more flexibility in incorporating a wide variety of constraints than conventional methods.

20.