Similar Documents
20 similar documents found (search time: 31 ms)
1.
Due to continuous technology scaling, soft errors have become a major reliability issue at nanoscale technologies. Single or multiple event transients at low levels can result in multiple correlated bit flips at the logic or higher abstraction levels. Addressing this correlation is essential for accurate low-level soft error rate estimation and, more importantly, for cross-level error abstraction, e.g. from bit errors at the logic level to word errors at the register-transfer level. This paper proposes a novel error estimation method that takes both signal and error correlations into consideration. It unifies the treatment of error-free and erroneous signals, so that error probabilities and correlations can be computed with the same techniques used for calculating signal probabilities and correlations. The proposed method not only reports accurate error probabilities when internal gates are impaired by soft errors, but also quantifies the error correlations in the propagation process. This makes it a versatile technique for high-level error estimation. The experimental results validate the proposed technique: compared with Monte-Carlo simulation, it is 5 orders of magnitude faster, while the average inaccuracy of error probability estimation is only 0.02.
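The unified treatment of error-free and erroneous signals can be illustrated with a toy sketch (my own simplification, not the paper's algorithm): propagate signal probabilities through a small netlist under an independence assumption, once fault-free and once with a transient flip at an internal gate. The paper's method additionally models the correlations that this sketch deliberately omits.

```python
# Toy signal-probability propagation through gates, assuming
# independent inputs (the correlation modeling of the paper is omitted).

def p_and(pa, pb): return pa * pb
def p_or(pa, pb):  return pa + pb - pa * pb
def p_not(pa):     return 1.0 - pa

def output_prob(p_in, flipped=False):
    """P(output = 1) of a tiny circuit y = NOT(a AND b) OR c.
    If `flipped`, a soft error inverts the AND gate's output."""
    a, b, c = p_in
    g = p_and(a, b)
    if flipped:
        g = p_not(g)           # transient fault: gate output inverted
    return p_or(p_not(g), c)

p_in = (0.5, 0.5, 0.25)
p_good = output_prob(p_in)                 # fault-free output probability
p_bad  = output_prob(p_in, flipped=True)   # output probability under the fault
```

Comparing `p_good` and `p_bad` shows how an internal flip shifts the output's signal probability; the paper's method goes further and computes the probability that the two outputs actually differ, including correlations.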

2.
3.
As devices and operating voltages are scaled down, future circuits will be plagued by higher soft error rates, reduced noise margins and defective devices. A key challenge for the future is retaining high reliability in the presence of faulty devices and noise. Probabilistic computing offers one possible approach. In this paper we describe our approach for mapping circuits onto CMOS using principles of probabilistic computation. In particular, we demonstrate how Markov random field elements may be built in CMOS and used to design combinational circuits running at ultra low supply voltages. We show that with our new design strategy, circuits can operate in highly noisy conditions and provide superior noise immunity, at reduced power dissipation. If extended to more complex circuits, our approach could lead to a paradigm shift in computing architecture without abandoning the dominant silicon CMOS technology.
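The Markov random field view of logic can be sketched in a few lines (a hypothetical illustration, not the paper's CMOS circuit): valid input/output pairs of a gate get low clique energy, so the correct output is the most probable state of a Gibbs distribution, which is what yields noise tolerance at ultra-low supply voltages.

```python
import math
import random

def energy(x, y):
    """Clique energy of an inverter: low when y = NOT x, high otherwise."""
    return 0.0 if y == 1 - x else 1.0

def sample_output(x, temperature, rng):
    """Sample y with probability proportional to exp(-U(x,y)/T)."""
    w0 = math.exp(-energy(x, 0) / temperature)
    w1 = math.exp(-energy(x, 1) / temperature)
    return 1 if rng.random() < w1 / (w0 + w1) else 0

rng = random.Random(1)
# Low "temperature" models low noise: the output is almost always correct,
# yet the gate degrades gracefully rather than failing hard as noise grows.
trials = 2000
correct = sum(sample_output(0, 0.2, rng) == 1 for _ in range(trials)) / trials
```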

4.
5.
The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
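The prioritization idea behind quality layers can be sketched as a greedy assignment (a simplified illustration, not the paper's drift-aware estimator): rank NAL units by distortion reduction per bit, so that an extractor can drop the least useful units first. Unit names, rates, and distortion values below are made up.

```python
# Greedy quality-layer assignment by distortion reduction per bit.
# nal_units: list of (name, rate_bits, distortion_reduction).

def assign_quality_layers(nal_units, num_layers):
    """Return {name: quality_layer}; layer 0 = highest priority."""
    ranked = sorted(nal_units, key=lambda u: u[2] / u[1], reverse=True)
    layers = {}
    per_layer = max(1, len(ranked) // num_layers)
    for i, (name, _, _) in enumerate(ranked):
        layers[name] = min(i // per_layer, num_layers - 1)
    return layers

units = [("A", 1000, 50.0), ("B", 500, 40.0), ("C", 2000, 30.0), ("D", 800, 8.0)]
layers = assign_quality_layers(units, 2)   # B and A land in layer 0
```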

6.
Erasing and programming are achieved in the device through electron tunneling. To inhibit programming of unselected cells, program-inhibiting voltages are applied to the unselected bit lines and word lines. The number of parity bits for error checking and correction (ECC) is five per 2 bytes, controlled by the lower byte (LB) signal. Using a conventional 1.5 μm design rule n-well CMOS process with a single metal layer and two polysilicon layers, the memory cell size is 7×8 μm² and the chip size is 5.55×7.05 mm². The chip size is reduced to 70% of that of a full-featured electrically erasable programmable ROM (EEPROM) with on-chip ECC.
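Five parity bits per 2 bytes is exactly what a Hamming(21,16) single-error-correcting code needs for 16 data bits. The following sketch (an illustration of that code, not the chip's actual ECC circuitry) encodes 16 data bits with 5 parity bits and corrects any single bit flip.

```python
# Hamming(21,16): parity bits sit at power-of-two positions 1..16 of a
# 21-bit codeword; parity bit p covers every position whose index has
# bit p set, so the syndrome directly names the erroneous position.

PARITY_POS = [1, 2, 4, 8, 16]

def hamming_encode(data16):
    """data16: list of 16 bits -> 21-bit codeword (positions 1..21)."""
    code = [0] * 22                      # index 0 unused
    it = iter(data16)
    for pos in range(1, 22):
        if pos not in PARITY_POS:
            code[pos] = next(it)
    for p in PARITY_POS:
        code[p] = sum(code[i] for i in range(1, 22) if i & p) % 2
    return code[1:]

def hamming_correct(codeword):
    """Return (corrected 16 data bits, error position or 0)."""
    code = [0] + list(codeword)
    syndrome = 0
    for p in PARITY_POS:
        if sum(code[i] for i in range(1, 22) if i & p) % 2:
            syndrome |= p
    if syndrome:
        code[syndrome] ^= 1              # flip the erroneous bit back
    data = [code[i] for i in range(1, 22) if i not in PARITY_POS]
    return data, syndrome
```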

7.
For purposes of simulating contemporary communication systems, it is, in many cases, useful to apply error models for specific levels of abstraction. Such models should approximate the packet error behavior of a given system at a specific protocol layer, thus incorporating the possible detrimental effects of lower protocol layers. Packet error models can efficiently be realized using finite-state models; for example, there exists a wide range of studies on using Markov models to simulate communication channels. In this paper, we consider aggregated Markov processes, which are a subclass of hidden Markov models (HMMs). Artificial limitations are set on the state transition probabilities of the models to find efficient methods of parameter estimation. We apply these models to the simulation of the performance of digital video broadcasting-handheld (DVB-H). The parameters of the packet error models are approximated as functions of the time-variant received signal strength and speed of a mobile vehicular DVB-H receiver, and it is shown that useful results may be achieved with the described packet error models, particularly when simulating mobile reception in field conditions.
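A minimal two-state Gilbert-Elliott model is a classic special case of such finite-state packet-error models (the parameter values below are illustrative, not fitted DVB-H data):

```python
import random

def simulate_gilbert_elliott(n, p_gb, p_bg, loss_good, loss_bad, seed=0):
    """Return a list of n packet outcomes (True = lost).
    p_gb / p_bg: per-packet Good->Bad and Bad->Good transition probabilities;
    loss_good / loss_bad: per-state packet loss probabilities."""
    rng = random.Random(seed)
    state_bad = False
    losses = []
    for _ in range(n):
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
        losses.append(rng.random() < (loss_bad if state_bad else loss_good))
    return losses

losses = simulate_gilbert_elliott(50_000, p_gb=0.02, p_bg=0.2,
                                  loss_good=0.001, loss_bad=0.5)
rate = sum(losses) / len(losses)
# Stationary bad-state probability = p_gb/(p_gb+p_bg) ~ 0.091, so the
# long-run loss rate should be near 0.091*0.5 + 0.909*0.001 ~ 0.046.
```

The burstiness (long error-free runs punctuated by loss bursts) is what a simple iid loss model with the same average rate would miss.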

8.
The International Standards Organization's (ISO's) seven-layer reference model for computer communication protocols is discussed. The layering arises out of the stratification of functions across the layers. The protocol functions performed at each higher layer are examined in detail, identifying the procedures that affect the efficient operation of these protocols over satellite links for different ranges of transmission speeds and bit error rates. Appropriate modifications are presented for removing the possible degradation and improving performance. Specifically, changes are discussed in Transport Protocol Class 4 (ISO 8073), the Session Protocol (ISO 8327), and the File Transfer, Access, and Management (FTAM) Protocol (ISO 8571).

9.
This paper presents a 5-Gb/s clock and data recovery (CDR) circuit which implements a calibration circuit to correct static phase offsets in a linear phase detector. Static phase offsets directly reduce the performance of CDR circuits, as the incoming data is not sampled at the center of the eye. Process nonidealities can cause static phase offsets in linear phase detectors by adversely affecting the circuits in a way which is difficult to design for, making calibration an attractive solution. Both the calibration algorithm and the test chip implementation are described, and measured results are presented. The CDR circuit was fabricated in a 0.18-μm, six-metal-layer standard CMOS process. With a pseudorandom bit sequence of 2⁷−1, calibration improved the measured bit error rate from 4.6×10⁻² to less than 10⁻¹³.
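The 2⁷−1 pseudorandom bit sequence (PRBS-7) used for such BER measurements is conventionally generated by a 7-bit LFSR with polynomial x⁷+x⁶+1; a software sketch of that generator:

```python
# PRBS-7 via a Fibonacci LFSR with taps at stages 7 and 6 (x^7 + x^6 + 1,
# a primitive polynomial), giving the maximal period of 2^7 - 1 = 127.

def prbs7(seed=0b1111111):
    """Yield one full 127-bit period of PRBS-7."""
    state = seed & 0x7F
    for _ in range(127):
        bit = ((state >> 6) ^ (state >> 5)) & 1   # feedback from taps 7, 6
        yield state & 1                            # output the low bit
        state = ((state << 1) | bit) & 0x7F

sequence = list(prbs7())
```

Over one period a maximal-length sequence contains 64 ones and 63 zeros, a quick sanity check on the generator.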

10.
11.
An approximate analytical formulation of the resource allocation problem for handling variable bit rate multiclass services in a cellular direct sequence code-division multiple-access (DS-CDMA) system is presented. The novelty in this paper is that all grade-of-service (GoS) or quality-of-service (QoS) requirements at the connection level, packet level, and link layer are satisfied simultaneously, instead of being satisfied at the connection level or at the link layer only. The analytical formulation shows how the GoS/QoS in the different layers are intertwined across the layers. A complete sharing (CS) scheme with guard channels is used for the resource sharing policy at the connection level. The CS model is solved using a K-dimensional Markov chain. Numerical results illustrate that significant gain in system utilization is achieved through the joint coupling of connection/packet levels and link layer. This can translate to more revenues for network providers and/or lower charges for mobile users.
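The connection-level guard-channel idea can be sketched with a single-class birth-death chain (a much simpler model than the paper's K-dimensional Markov chain; all rates below are illustrative): with C channels and the last g reserved, new calls are blocked once occupancy reaches C−g, while handoffs may use all C channels.

```python
# Stationary distribution of an M/M/C-style chain with guard channels,
# via the detailed-balance recursion pi[k] = pi[k-1] * arrival_rate / (k*mu).

def guard_channel_blocking(C, g, lam_new, lam_ho, mu):
    """Return (new-call blocking prob, handoff dropping prob)."""
    pi = [1.0]
    for k in range(1, C + 1):
        arrival = lam_new + lam_ho if k - 1 < C - g else lam_ho
        pi.append(pi[-1] * arrival / (k * mu))
    Z = sum(pi)
    pi = [p / Z for p in pi]
    p_block_new = sum(pi[C - g:])   # new call arriving with k >= C-g busy
    p_drop_ho = pi[C]               # handoff arriving with all C busy
    return p_block_new, p_drop_ho

pb, pd = guard_channel_blocking(C=10, g=2, lam_new=4.0, lam_ho=1.0, mu=1.0)
```

Reserving guard channels makes handoff dropping far rarer than new-call blocking, which is the usual GoS design goal.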

12.
With increasing levels of integration of multiple processing cores and new features to support software functionality, recent generations of microprocessors face difficult validation challenges. The systematic validation approach starts with defining the correct behaviors of the hardware and software components and their interactions. This requires new modeling paradigms that support multiple levels of abstraction. Mutual consistency of models at adjacent levels of abstraction is crucial for manual refinement of models from the full chip level to production register transfer level, which is likely to remain the dominant design methodology of complex microprocessors in the near future. In this paper, we present microprocessor modeling and validation environment (MMV), a validation environment based on metamodeling, that can be used to create models at various abstraction levels and to generate most of the important validation collaterals, viz., simulators, checkers, coverage, and test generation tools. We illustrate the functionalities in MMV by modeling a 32-bit reduced instruction set computer processor at the system, instruction set architecture, and microarchitecture levels. We show by examples how consistency across levels is enforced during modeling and also how to generate constraints for automatic test generation.

13.
An efficient algorithm for locating soft and hard failures in WDM networks
Fault identification and location in optical networks is hampered by a multitude of factors: the redundancy and lack of coordination (internetworking) of the management systems at the different layers (WDM, SDH/SONET, ATM, IP); the large number of alarms a single failure can trigger; the difficulty of detecting some failures; and the resulting need to cope with missing or false alarms. Moreover, the problem of multiple fault location is NP-complete, so processing time may become an issue for large meshed optical networks. We propose an algorithm for locating multiple failures at the physical layer of a WDM network. They can be either hard failures, that is, unexpected events that suddenly interrupt the established channels; or soft failures, that is, events that progressively degrade the quality of transmission; or both. Hard failures are detected at the WDM layer. Soft failures can sometimes be detected at the optical layer if proper testing equipment is deployed, but often require performance monitoring at a higher layer (SDH, ATM, or IP). Both types of failures, and both types of error monitoring, are incorporated in our algorithm, which is based on a classification and abstraction of the components of the optical layer and of the upper layer. Our algorithm relies neither on timestamps nor on failure probabilities, which are difficult to estimate and to use in practice. Moreover, it also handles missing and false alarms. The nonpolynomial computational complexity of the problem is pushed into a precomputation phase, performed off-line when the optical channels are set up or cleared down. This results in fast on-line location of the failing components upon reception of the alarms.
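One common way to frame multi-failure location is as a set-cover problem over alarms, which is NP-complete in general; a greedy heuristic gives the flavor (a toy sketch with made-up component and alarm names — the paper's actual algorithm additionally handles missing/false alarms and precomputes the hard part off-line):

```python
# Greedy alarm explanation: repeatedly pick the candidate component
# failure that explains the most still-unexplained alarms.

def locate_failures(alarms, candidates):
    """alarms: set of alarm ids; candidates: {component: set of alarms
    that component's failure would trigger}. Returns (chosen, unexplained)."""
    remaining, chosen = set(alarms), []
    while remaining:
        best = max(candidates, key=lambda c: len(candidates[c] & remaining))
        if not candidates[best] & remaining:
            break                      # nothing explains the rest
        chosen.append(best)
        remaining -= candidates[best]
    return chosen, remaining

alarms = {"a1", "a2", "a3", "a4"}
candidates = {
    "amp3":   {"a1", "a2"},
    "fiber7": {"a3", "a4"},
    "mux1":   {"a2"},
}
chosen, unexplained = locate_failures(alarms, candidates)
```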

14.
The only way to keep pace with Moore's Law is to use probabilistic computing for memory design. Probabilistic computing is 'unavoidable', especially when scaled memory dimensions go down to levels where variability takes over. To print features below 20 nm, novel lithographies such as Extreme Ultra Violet (EUV) are required. However, transistor structures and memory arrays are strongly affected by pattern roughness caused by the randomness of such lithography, leading to variability-induced data errors in the memory read-out. This paper takes a probabilistic, holistic look at how to handle bit errors of NAND Flash memory and examines trade-offs between lithography processes and error-correcting codes to ensure data integrity.
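The lithography-vs-ECC trade-off comes down to a standard calculation: given a raw bit-error rate, the probability that a codeword accumulates more errors than the code can correct. A back-of-the-envelope sketch (iid bit errors; the codeword size and correction strengths below are illustrative, not taken from the paper):

```python
# P(more than t bit errors in an n-bit codeword) for iid errors with
# probability `ber`, summed stably via the binomial term recursion
# P(k+1) = P(k) * (n-k)/(k+1) * ber/(1-ber).

def p_uncorrectable(n, t, ber):
    term = (1.0 - ber) ** n          # P(exactly 0 errors)
    tail = 0.0
    for k in range(n + 1):
        if k > t:
            tail += term
        term *= (n - k) / (k + 1) * ber / (1.0 - ber)
    return tail

# A stronger code drives the uncorrectable-codeword probability down
# sharply, compensating for a lithography-limited raw BER.
weak   = p_uncorrectable(n=4096, t=4,  ber=1e-4)
strong = p_uncorrectable(n=4096, t=24, ber=1e-4)
```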

15.
Hierarchical low-power design for CMOS digital circuits
As more and more transistors can be integrated on a chip, circuit scale keeps growing and operating frequencies keep rising, which directly drives a rapid increase in chip power consumption. Whether from the standpoint of circuit reliability or of limited energy budgets, low power has become a key concern in CMOS digital circuit design. Because different design abstraction levels affect circuit power differently, low-power design methods and techniques with different emphases are discussed, covering the process, layout, circuit, logic, architecture, algorithm, and system levels. In practical designs, considering power comprehensively across these levels according to the specific application environment can significantly reduce circuit power consumption.
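The reason voltage-related techniques dominate across all these levels is the standard dynamic-power relation P = α·C·V²·f; a quick numeric sketch (all component values are illustrative):

```python
# Dynamic switching power of CMOS logic: activity factor * switched
# capacitance * supply voltage squared * clock frequency.

def dynamic_power(activity, cap_farads, vdd, freq_hz):
    return activity * cap_farads * vdd ** 2 * freq_hz

p_nominal = dynamic_power(0.15, 2e-9, 1.2, 1e9)   # 1.2 V, 1 GHz -> 0.432 W
p_scaled  = dynamic_power(0.15, 2e-9, 0.9, 1e9)   # 0.9 V, same frequency
savings = 1 - p_scaled / p_nominal                # quadratic benefit of V
```

Dropping the supply from 1.2 V to 0.9 V alone cuts dynamic power by about 44%, which is why voltage scaling is usually the first lever pulled.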

16.
Low power consumption is a key design metric for portable wireless network devices where battery energy is a limited resource. The resultant energy efficient design problem can be addressed at various levels of system design, and indeed much research has been done for hardware power optimization and power management within a wireless device. However, with the increasing trend towards thin client type wireless devices that rely more and more on network based services, a high fraction of power consumption is being accounted for by the transport of packet data over wireless links [28]. This offers an opportunity to optimize for low power in higher layer network protocols responsible for data communication among multiple wireless devices. Consider the data link protocols that transport bits across the wireless link. While traditionally designed around the conventional metrics of throughput and latency, a proper design offers many opportunities for optimizing the metric most relevant to battery operated devices: the amount of battery energy consumed per useful user level bit transmitted across the wireless link. This includes energy spent in the physical radio transmission process, as well as in computation such as signal processing and error coding. This paper describes how energy efficiency in the wireless data link can be enhanced via adaptive frame length control in concert with adaptive error control based on hybrid FEC (forward error correction) and ARQ (automatic repeat request). Key to this approach is a high degree of adaptivity. The length and error coding of the atomic data unit (frame) going over the air, and the retransmission protocol are (a) selected for each application stream (ATM virtual circuit or IP/RSVP flow) based on quality of service (QoS) requirements, and (b) continually adapted as a function of varying radio channel conditions due to fading and other impairments. 
We present analysis and simulation results on the battery energy efficiency achieved for user traffic of different QoS requirements, and describe hardware and software implementations.
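The core of frame-length adaptation can be seen in a simple ARQ-only model (a sketch under stated assumptions: fixed header of H bits, iid channel BER p, no FEC — the paper combines this with adaptive hybrid FEC/ARQ): goodput efficiency is η(L) = (L/(L+H))·(1−p)^(L+H), and the optimizer picks the payload length L that maximizes it.

```python
# Optimal frame payload length for a simple stop-and-wait-style model:
# longer frames amortize the header, shorter frames survive errors.

def efficiency(L, H, ber):
    return (L / (L + H)) * (1 - ber) ** (L + H)

def best_frame_length(H, ber, max_len=16384):
    return max(range(1, max_len + 1), key=lambda L: efficiency(L, H, ber))

good_channel = best_frame_length(H=256, ber=1e-5)   # clean channel
bad_channel  = best_frame_length(H=256, ber=1e-3)   # faded channel
```

As the channel degrades, the optimum shifts to much shorter frames, which is exactly the adaptation the data link performs as fading conditions vary.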

17.
Power consumption and heat dissipation are becoming the major factors that limit the performance evolution of current state-of-the-art microprocessors. As they become key elements in the design of both high performance computers and battery powered devices, different power and thermal management strategies have been proposed and implemented during the last years in order to overcome this performance limitation. Considering that software applications have a large impact on power consumption and thermal map of the CPU cores, these design strategies tend to be addressed at higher levels even as they are usually implemented at lower levels of systems abstraction. The work presented in this paper evaluates the relation between power consumption and thermal response of CPU cores when different software applications are executed. The goal of this study is to identify how software applications can be used in thermal management process and whether it is feasible to implement thermal-aware software applications.

18.
The increasing number of interconnect layers that are needed in a CMOS process to meet the routing and power requirements of large digital circuits also yields significant advantages for analog applications. The reverse thickness scaling of the top metal layer can be exploited in the design of low-loss transmission lines. Coplanar transmission lines in the top metal layers take advantage of a low metal resistance and a large separation from the heavily doped silicon substrate. They are therefore fully compatible with current and future CMOS process technologies. To investigate the feasibility of extending CMOS designs beyond 10 GHz, a wide range of coplanar transmission lines are characterized. The effect of the substrate resistivity on coplanar wave propagation is explained. After achieving a record loss of 0.3 dB/mm at 50 GHz, coplanar lines are used in the design of distributed amplifiers and oscillators. They are the first to achieve higher than 10 GHz operating frequencies in a conventional CMOS technology.

19.
The author proposes a software reliability model for a large real-time telecommunications software architecture. Some simple examples of the critical components of the software architecture and their dependencies are described. The component dependencies permit the propagation of faults from the component in which a fault originates to other components. This propagation can cause failures in a chain (or tree) of components. Detection of failures depends on the tests executed or on the number and type of customer requests. An error can occur in any component; it can be caused by a fault that propagated from another component or by a fault that originates in that component. The error can be traced through the component-dependency chain (or tree) to repair all the faults associated with it. The software reliability model guides the design of the software architecture.
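The propagation along a dependency chain can be sketched as follows (a toy model of the general idea, not the paper's reliability model: component names and probabilities are made up, and fault activations are assumed independent):

```python
# P(error observed at a component) combines its own fault probability
# with the error probabilities of everything it depends on, recursively.

def error_prob(component, depends_on, p_own, memo=None):
    if memo is None:
        memo = {}
    if component in memo:
        return memo[component]
    p_ok = 1.0 - p_own[component]
    for dep in depends_on.get(component, []):
        p_ok *= 1.0 - error_prob(dep, depends_on, p_own, memo)
    memo[component] = 1.0 - p_ok
    return memo[component]

# A three-component chain: call processing depends on signaling,
# signaling depends on the database component.
depends_on = {"callproc": ["signaling"], "signaling": ["db"]}
p_own = {"callproc": 0.01, "signaling": 0.02, "db": 0.05}
p_top = error_prob("callproc", depends_on, p_own)
```

Tracing an observed error back down the same chain is what identifies the originating component's fault.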

20.
Design reuse requires engineers to determine whether or not an existing block implements the desired functionality. If a common high-level circuit model is used to represent components that are described at multiple levels of abstraction, comparisons between circuit specifications and a library of potential implementations can be performed accurately and quickly. A mechanism is presented for compactly specifying circuit functionality as polynomials at the word level. Polynomials can be used to represent circuits that are described at the bit level or arithmetically. Furthermore, in representing components as polynomials, differences in precision between potential implementations can be detected and quantified. We present a mechanism for constructing polynomial models for combinational and sequential circuits, and derive a means of approximating the functionality of nonpolynomial functions and determining a bound on the error of this approximation. These methods have been implemented in the POLYSYS synthesis tool and used to synthesize a JPEG encode block and an infinite impulse response filter from a library of complex elements.
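The idea of polynomial circuit models rests on the standard exact polynomials for bit-level gates over {0,1} — NOT(x) = 1−x, AND(x,y) = xy, OR(x,y) = x+y−xy, XOR(x,y) = x+y−2xy — which compose into word-level polynomials. A small sketch (the gate polynomials are standard; the half-adder example is my own illustration, not taken from POLYSYS):

```python
# Exact polynomial models of Boolean gates over inputs in {0, 1}.

def p_not(x):    return 1 - x
def p_and(x, y): return x * y
def p_or(x, y):  return x + y - x * y
def p_xor(x, y): return x + y - 2 * x * y

def half_adder_word(x, y):
    """Composed word-level polynomial sum + 2*carry of a half adder;
    equal polynomials mean functionally equivalent circuits."""
    s = p_xor(x, y)      # sum bit
    c = p_and(x, y)      # carry bit
    return s + 2 * c

# The composed polynomial reproduces integer addition on all bit inputs.
ok = all(half_adder_word(x, y) == x + y for x in (0, 1) for y in (0, 1))
```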


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号