Similar documents
20 similar documents found (search time: 31 ms)
1.
Providing correct reliability information is critical for all manufacturers, both for their customers and for the in-house departments responsible for product improvement. Solid state lighting (SSL), as a novel lighting technology, is no exception. Because SSL is a relatively new technology, research on its reliability issues and on systematic methods for design improvement is lacking. In this paper, we introduce a "Design for Reliability" approach for SSL. Our approach comprises three major steps. The first step is the design phase, in which the physics of failure of the device is investigated and its reliability is virtually assessed. Based on these results, the failure causes and their relationship to the reliability and lifetime of the SSL device are defined. Sensors are then assigned to the critical failure causes at the proper positions in the system. The second step is processing the sensor data while the device is operating: the gathered data are processed to calculate the remaining lifetime of the device. The system should have the processing capability to accomplish this task and to report its status to the maintenance system. The third step is built-in self-maintenance, so that the system can predict its own failure and remain functional for a longer time. In the second part of this paper, we introduce an SSL device designed in our group that fulfills the criteria of our "Design for Reliability" approach. This SSL device provides around 200 lumens and can sense and monitor high temperatures in the system; a high junction temperature is one of the major failure causes in SSL devices. The additional flexibility afforded by the device's smart controller allows a self-maintenance solution to be implemented.
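The third-step link from a sensed junction temperature to a predicted lifetime can be sketched with an Arrhenius acceleration model, a common choice for temperature-driven LED degradation. This is an illustrative sketch only: the reference lifetime, reference temperature, and activation energy below are assumed values, not figures from the paper.

```python
import math

def arrhenius_lifetime(t_junction_c, l_ref_hours=50_000.0, t_ref_c=85.0,
                       e_a_ev=0.7, k_b=8.617e-5):
    """Estimate lamp lifetime at a measured junction temperature using an
    Arrhenius acceleration model (all parameter values are illustrative)."""
    t = t_junction_c + 273.15       # measured junction temperature, kelvin
    t_ref = t_ref_c + 273.15        # reference temperature, kelvin
    # Acceleration factor relative to the reference condition.
    af = math.exp((e_a_ev / k_b) * (1.0 / t - 1.0 / t_ref))
    return l_ref_hours * af

# Hotter junctions shorten the predicted lifetime.
assert arrhenius_lifetime(125.0) < arrhenius_lifetime(85.0)
```

A device-resident smart controller could evaluate such a model on its own sensor readings and trigger self-maintenance when the projected lifetime falls below a threshold.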

2.
CMOS reliability is facing unprecedented challenges due to the continued scaling of device dimensions. To sustain current scaling trends, it is imperative to understand the fundamental physics of failure mechanisms. Owing to the inherent complexity of these mechanisms, some of the key ones can be understood only through a numerical modeling approach. Most failure mechanisms have a characteristic time dependence to failure. Hence, in this work we use a numerical approach to investigate the time dependence of the failure mechanism associated with interfacial kinetics at the Si/SiO2 interface. Several models are critically examined to develop a reaction/diffusion based modeling framework for predicting interface state generation. Our modeling shows that reactions at the Si/SiO2 interface have a direct impact on the time dependence (or time slopes). These time-kinetics predictions shed light on the underlying mechanisms behind a technologically important failure mechanism, negative bias temperature instability (NBTI). In particular, the breaking of an interfacial Si-H bond to release atomic H results in a time slope of 0.25, whereas the release of molecular H2 results in a time slope of 0.165. Based on this model, we conclude that NBTI degradation is dominated by diffusion of neutral molecular hydrogen defects. These models are extended to 2D simulations to study device layout effects. Our simulations suggest differences with device structure (Lgate, width, etc.) and agree with observed experimental results. The developed models are further applied to understand operation under dynamic and static stress.
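The two reported time slopes imply markedly different long-time degradation. A minimal power-law sketch makes the contrast concrete; only the exponents 0.25 and 0.165 come from the abstract, while the prefactor and stress time are arbitrary:

```python
def interface_traps(t_seconds, slope, prefactor=1.0):
    """Power-law interface-state generation: N_it(t) = A * t**n."""
    return prefactor * t_seconds ** slope

# After 10^4 s of stress, the atomic-H slope (n = 0.25) predicts
# substantially more trap generation than the H2 slope (n = 0.165).
ratio = interface_traps(1e4, 0.25) / interface_traps(1e4, 0.165)
assert ratio > 2.0
```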

3.
This paper presents an approach to system reliability modeling in which failures and errors are not statistically independent. The repetition of failures and errors until their causes are removed is affected by the system processes and degrades system reliability. Four types of failures are introduced: hardware transients, software design errors, hardware design errors, and program faults. The probability of failure, mean time to failure, and system reliability depend on the type of failure. Actual measurements show that the most critical factor for system reliability is the period after the occurrence of a failure, during which the failure can be repeated in every process that accesses the failed component. An example involving measurements collected in an IBM 4331 installation validates the model and shows its applications. The degradation of system reliability can be appreciable even over very short periods of time; this is why the conditional probability of repetition of failures is introduced. The reliability model allows prediction of system reliability based on the calculation of the mean time to failure. Comparison with the measurement results shows that the model with process-dependent repetition of failures approximates system reliability more accurately than the model that assumes independent failures.

4.
The paper criticises the underlying assumptions made in much early modeling of computer software reliability. The following suggestions will improve modeling.
1) Do not apply hardware techniques to software without thinking carefully. Software differs from hardware in important respects; we ignore these at our peril. In particular:
2) Do not use MTTF or MTBF for software unless certain that they exist. Even then, remember that:
3) Distributions are always more informative than moments or parameters, so try to avoid commitment to a single measure of reliability. Anyway:
4) There are better measures than MTTF. Percentiles and failure rates are more intuitively appealing than means.
5) Software reliability means operational reliability. Who cares how many bugs are in a program? We should be concerned with their effect on its operation. In fact:
6) Bug identification (and elimination) should be separated from reliability measurement, if only to ensure that the measurers do not have a vested interest in getting good results.
7) Use a Bayesian approach and do not be afraid to be subjective. All our statements will ultimately be about our beliefs in the quality of programs.
8) Do not stop at a reliability analysis; try to model the lifetime utility (or cost) of programs.
9) Now is the time to devote effort to structural models.
10) Structure should be of a kind appropriate to software, e.g. top-down, modular.
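Suggestion 4 is easy to make concrete: even under the simple exponential failure model, percentiles tell a different story than the mean. A small sketch (the MTTF value is arbitrary):

```python
import math

def exp_percentile(p, mttf):
    """Time by which a fraction p of failures has occurred, assuming an
    exponential time-to-failure distribution with the given mean."""
    return -mttf * math.log(1.0 - p)

mttf = 1000.0
median = exp_percentile(0.5, mttf)   # about 693 hours
tenth = exp_percentile(0.1, mttf)    # about 105 hours
# Half of all failures occur well before the MTTF is reached.
assert median < mttf
```

Reporting the 10th percentile (roughly a tenth of the MTTF here) gives a user a far more intuitive sense of early failure risk than the mean alone.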

5.
The lifetime of micro electro–thermo–mechanical actuators with complex electro–thermo–mechanical coupling mechanisms can be decreased significantly by unexpected failure events. Even more serious is the fact that the various failures are tightly coupled due to micro-size and multi-physics effects. The interrelation between performance and potential failures should be established to predict the reliability of actuators and improve their design. Thus, a multiphysics modeling approach is proposed to evaluate such interactive effects of failure mechanisms on actuators, where potential failures are pre-analyzed with the FMMEA (Failure Modes, Mechanisms, and Effects Analysis) tool to guide the electro–thermo–mechanical reliability modeling process. Peak values of temperature, thermal stresses/strains, and tip deflection are estimated as indicators for various failure modes and factors (e.g. residual stresses, thermal fatigue, electrical overstress, plastic deformation, and parameter variations). Compared with analytical solutions and experimental data, the obtained simulation results were found suitable for coupled performance and reliability analysis of micro actuators and for assessment of their design.

6.
Random testing techniques have been extensively used in reliability assessment, as well as in debug testing. When used to assess software reliability, random testing selects test cases based on an operational profile; in the context of debug testing, random testing often uses a uniform distribution. However, generally neither an operational profile nor a uniform distribution is chosen from the perspective of maximizing the effectiveness of failure detection. Adaptive random testing has been proposed to enhance the failure detection capability of random testing by evenly spreading test cases over the whole input domain. In this paper, we propose a new test profile, different from both the uniform distribution and operational profiles, whose aim is to maximize the effectiveness of failure detection. We integrate this new test profile with some existing adaptive random testing algorithms and develop a family of new random testing algorithms. These new algorithms not only distribute test cases more evenly, but also have better failure detection capabilities than the corresponding original adaptive random testing algorithms. As a consequence, they perform better than pure random testing.
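A common baseline in the adaptive random testing family discussed here is fixed-size-candidate-set ART. The 1-D sketch below shows the core even-spreading step; the candidate-set size and the unit input domain are arbitrary choices for illustration:

```python
import random

def fscs_art(executed, candidates=10, lo=0.0, hi=1.0, rng=random):
    """Fixed-size-candidate-set ART (one illustrative variant): draw a
    pool of random candidates and run the one farthest from all
    previously executed test cases (1-D numeric input domain)."""
    if not executed:
        return rng.uniform(lo, hi)
    pool = [rng.uniform(lo, hi) for _ in range(candidates)]
    # Keep the candidate whose nearest executed neighbour is farthest away.
    return max(pool, key=lambda c: min(abs(c - e) for e in executed))

# Each new test case spreads away from the ones already run.
tests = []
r = random.Random(42)
for _ in range(5):
    tests.append(fscs_art(tests, rng=r))
assert all(0.0 <= t <= 1.0 for t in tests)
```

In the approach the abstract describes, the uniform candidate generation would instead draw from the proposed failure-detection-oriented test profile.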

7.
Considerable advances in the modeling and measurement of packet-switched networks have been made since the concept emerged in the late sixties. In this paper, we first review the modeling techniques most frequently used to study these packet transport networks; for each technique we provide a brief introduction, a discussion of its capabilities and limitations, and one or more representative applications. Next we review the basic measurement tools, their capabilities, their limitations, and their applicability to and implementation in different networks, namely land-based wire networks, satellite networks, and ground packet radio networks; we also show the importance of well-designed experiments in satisfying the many measurement goals. Finally, we briefly discuss some open problems for future research.

8.
Microelectronics Reliability, 2014, 54(9–10): 1963–1968
Knowledge and control of local stress development in Back-End-of-Line (BEoL) stacks and nearby Through-Silicon Vias (TSVs) in advanced 3D integrated devices is key to their thermo-mechanical reliability. The paper presents a combined simulation/measurement approach to evaluate the stresses generated as a result of TSV and BEoL stack manufacturing and 3D bonding processes. Stress measurement methods with high spatial resolution (microRaman and Focused Ion Beam (FIB) based stress-release techniques) are used to obtain stress data from real components as manufactured. Finite Element Analysis (FEA) allows a more accurate interpretation of the measurement results as well as a subsequent comprehensive analysis of failure behaviour. The paper introduces the applied local stress measurements on advanced multi-layer systems and 3D integration components, referring to state-of-the-art capabilities and limitations. The need for experimental stress data is illustrated with FEA examples of 3D IC integration components that currently lack appropriate residual-stress input for an assumed initial state.

9.
Failure correlation in software reliability models
Perhaps the most stringent restriction in most software reliability models is the assumption of statistical independence among successive software failures. The authors' research was motivated by the fact that, although there are practical situations in which this assumption can easily be violated, much of the published literature on software reliability modeling does not seriously address the issue. The work in this paper is devoted to developing a software reliability modeling framework that can accommodate the phenomenon of failure correlation, and to studying its effects on software reliability measures. The important property of the developed Markov renewal modeling approach is its flexibility: it allows construction of the software reliability model in both discrete and continuous time and, depending on the goals, allows the analysis to be based either on Markov chain theory or on renewal process theory. This modeling approach is thus an important step toward more consistent and realistic modeling of software reliability. It can be related to existing software reliability growth models: many input-domain and time-domain models can be derived as special cases under the assumption of failure s-independence. This paper aims to show that classical software reliability theory can be extended to consider a sequence of possibly s-dependent software runs, viz. failure correlation. It does not deal with inference or prediction per se. For the model to be fully specified and applied to estimation and prediction in real software development projects, many research issues must be addressed, e.g., the detailed assumptions about the nature of the overall reliability growth and the way modeling parameters change as a result of fault-removal attempts.
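The effect of failure correlation is easy to see in a toy simulation: make each run's failure probability depend on whether the previous run failed (a two-state Markov chain of software runs). All probabilities and the run count below are invented for illustration; the paper's Markov renewal framework is far more general.

```python
import random

def simulate_runs(n, p_fail, p_fail_given_fail, seed=1):
    """Count failures over n software runs where failures cluster: the
    failure probability is higher immediately after a failed run."""
    rng = random.Random(seed)
    failed_prev, failures = False, 0
    for _ in range(n):
        p = p_fail_given_fail if failed_prev else p_fail
        failed_prev = rng.random() < p
        failures += failed_prev
    return failures

# Correlated failures inflate the failure count relative to the
# s-independent model with the same baseline probability.
dependent = simulate_runs(10_000, 0.05, 0.50)
independent = simulate_runs(10_000, 0.05, 0.05)
assert dependent > independent
```

A reliability estimate fitted under the independence assumption would therefore be optimistic for the clustered process.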

10.
The next-generation convergent microsystems, based on system-on-package (SOP) technology, require up-front system-level design-for-reliability approaches and appropriate reliability assessment methodologies to guarantee the reliability of digital, optical, and radio frequency (RF) functions, as well as their interfaces. A systems approach to reliability requires the development of: i) physics-based reliability models for the various failure mechanisms associated with digital, optical, and RF functions, and their interfaces in the system; ii) design optimization models for the selection of suitable materials and processing conditions for reliability, as well as functionality; and iii) system-level reliability models capturing component and functional interactions. This paper presents the reliability assessment of digital, optical, and RF functions in SOP-based microsystems. Up-front physics-based design-for-reliability models for various functional failure mechanisms are presented to evaluate design options and material selection even before prototypes are made. Advanced modeling methodologies and algorithms that accommodate material length-scale effects due to enhanced system integration and miniaturization are presented. System-level mixed-signal reliability is discussed through system-level reliability metrics relating component-level failure mechanisms to system-level signal integrity, as well as statistical aspects.

11.
The reliability assessment of electronic packages demands more accurate and efficient methods for evaluating heterogeneous packages and their interconnects under various measurement conditions. The digital image correlation (DIC) method has matured considerably over the last decade. With proper improvements, this work demonstrates that the DIC method can fulfill various experimental tasks and, within a single measurement, obtain the information needed for interconnect strain analysis, coefficient of thermal expansion (CTE) characterization, in-plane displacement measurement, and out-of-plane warpage quantification. To some extent, it serves as a comprehensive tool for the reliability assessment of electronic packages. Given that the DIC technique is new to the electronic packaging area, this work illustrates the principle of this optical non-contact experimental method and presents several improvements that fit the measurement needs of electronic packaging. With these applications, it is foreseeable that the DIC method will play an important role in the reliability assessment of electronic packages.

12.
The measurement and prediction of the reliability of semiconductor devices is becoming increasingly dependent upon an understanding of device physics. The present level of understanding of device physics is, however, inadequate for many practical purposes. This paper reviews the present status of p-n junction theory, its relevance to device degradation, and the experimental techniques available for measuring intrinsic device properties. Limitations in the theory, and the additional development required to meet the needs of device reliability evaluation and prediction, are considered. The role of modeling in evaluating device reliability is discussed.

13.
This paper presents modeling and estimation techniques permitting the temperature-aware optimization of application-specific multiprocessor system-on-chip (MPSoC) reliability. Technology scaling and increasing power densities make MPSoC lifetime reliability problems more severe. MPSoC reliability strongly depends on system-level MPSoC architecture, redundancy, and thermal profile during operation. We propose an efficient temperature-aware MPSoC reliability analysis and prediction technique that enables MPSoC reliability optimization via redundancy and temperature-aware design planning. Reliability, performance, and area are concurrently optimized. Simulation results indicate that the proposed approach has the potential to substantially improve MPSoC system mean time to failure with small area overhead.
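The temperature-reliability coupling can be sketched with per-core Arrhenius failure rates; for a non-redundant (series) MPSoC the core rates simply add. The rate constants, activation energy, and temperatures below are illustrative assumptions, not the paper's models:

```python
import math

def core_rate(temp_c, rate_ref=1e-6, t_ref_c=60.0, e_a_ev=0.7, k_b=8.617e-5):
    """Arrhenius-style per-hour failure rate for one core, relative to
    an assumed reference rate at an assumed reference temperature."""
    return rate_ref * math.exp(
        (e_a_ev / k_b) * (1.0 / (t_ref_c + 273.15) - 1.0 / (temp_c + 273.15)))

def system_mttf(core_temps_c):
    """Series (non-redundant) system: rates add, and the system MTTF is
    the reciprocal of the total failure rate."""
    return 1.0 / sum(core_rate(t) for t in core_temps_c)

# Cooling a single hotspot raises the whole system's MTTF.
assert system_mttf([60.0, 60.0, 60.0]) > system_mttf([60.0, 60.0, 90.0])
```

This is the basic trade-off the paper's optimization navigates: floorplanning and redundancy choices reshape the thermal profile, which in turn reshapes every per-core rate.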

14.
This paper presents a new method for incorporating imperfect FC (fault coverage) into a combinatorial model. Imperfect FC, the probability that a single malicious fault can thwart automatic recovery mechanisms, is important to accurate reliability assessment of fault-tolerant computer systems. Until recently, it was thought that the consideration of this probability necessitated a Markov model rather than the simpler (and usually faster) combinatorial model. SEA, the new approach, separates the modeling of FC failures into two terms that are multiplied to compute the system reliability. The first term, a simple product, represents the probability that no uncovered fault occurs. The second term comes from a combinatorial model which includes the covered faults that can lead to system failure. This second term can be computed by any common approach (e.g. fault tree, block diagram, digraph) that ignores the FC concept, by slightly altering the component-failure probabilities. The result of this work is that reliability engineers can use their favorite software package (which ignores the FC concept) for computing reliability, and then adjust the input and output of that program slightly to produce a result which includes FC. This method applies to any system for which: the FC probabilities are constant and state-independent; the hazard rates are state-independent; and an FC failure leads to immediate system failure.
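The two-term separation can be sketched for a toy system. The sketch below conditions each component's failure probability on "no uncovered fault occurred", one straightforward way to realize the adjustment the abstract describes; the two-component parallel structure and all numeric values are invented for illustration:

```python
def sea_reliability(q, c, comb_model):
    """Two-term SEA-style computation: (probability that no uncovered
    fault occurs) times a combinatorial model evaluated with adjusted
    covered-failure probabilities."""
    p_no_uncovered = 1.0
    q_covered = []
    for qi, ci in zip(q, c):
        no_uncov = 1.0 - qi * (1.0 - ci)      # component avoids an uncovered fault
        p_no_uncovered *= no_uncov
        q_covered.append(qi * ci / no_uncov)  # failure prob given that event
    return p_no_uncovered * comb_model(q_covered)

# Combinatorial model for a two-component parallel system: it fails
# only if both components fail (covered).
parallel2 = lambda qs: 1.0 - qs[0] * qs[1]

r_perfect = sea_reliability([0.1, 0.1], [1.0, 1.0], parallel2)
r_imperfect = sea_reliability([0.1, 0.1], [0.9, 0.9], parallel2)
assert r_imperfect < r_perfect  # imperfect coverage erodes redundancy
```

Note that `comb_model` here stands in for whatever FC-ignorant tool (fault tree, block diagram, digraph) an engineer already uses.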

15.
In this study, we present the thermal analysis and experimental performance assessment of an aluminum silicon carbide (AlSiC) metal matrix composite (MMC) base plate with integral cooling fins. By attaching a pin-finned base plate to an open-chambered flow-through heat sink, the mechanical interface between the base plate and the cooling medium is eliminated. This reduces the overall thermal resistance and improves module reliability compared with traditional base plate cooling schemes. Computational fluid dynamics and heat transfer techniques were employed to model the thermal and hydrodynamic resistance characteristics through the pin fin structure of a prototype base plate design. A unit-cell approach was employed to avoid the computational expense of modeling the entire pin array. Performance was verified experimentally in a closed-loop test facility using water as the cooling fluid. It was found that the unit-cell approach produced good agreement with experimental pressure drop and heat transfer results.

16.
A modeling approach to power amplifier design for implementation in OFDM radio units is presented. The power amplifier model assesses the impact of linear memory effects within the system using a Wiener representation, and employs a novel linear parametric estimation technique based on Hilbert spaces. In addition, to model the nonlinear memory effects, the previous topology is generalized by inserting a truncated Volterra filter before the static nonlinearity. Predistortion based on the Hammerstein model is introduced to deal with the nonlinear response, and a new general algorithm is proposed to evaluate the Hammerstein model parameters for an OFDM system. A representative test bed was designed and implemented. The assessment of the new methods for PA and PD modeling is confirmed by experimental measurements, which reveal the advantage of the new techniques over existing approaches.
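The Hammerstein structure used for the predistorter (a static polynomial nonlinearity followed by a linear filter with memory) can be sketched directly; the polynomial and FIR coefficients below are placeholders, not estimated PA parameters:

```python
def hammerstein(x, poly, fir):
    """Hammerstein model: memoryless polynomial z[n] = sum_k poly[k]*x[n]**(k+1),
    followed by a linear FIR filter y[n] = sum_m fir[m]*z[n-m]."""
    z = [sum(c * s ** (k + 1) for k, c in enumerate(poly)) for s in x]
    return [sum(h * z[n - m] for m, h in enumerate(fir) if n - m >= 0)
            for n in range(len(z))]

# An impulse through the model exposes the filter's memory directly.
y = hammerstein([1.0, 0.0, 0.0], poly=[1.0], fir=[0.5, 0.25])
assert y == [0.5, 0.25, 0.0]
```

Parameter estimation (the paper's contribution) would fit `poly` and `fir` so the cascade inverts the amplifier's measured nonlinear response.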

17.
Modeling the effect of reliability on performance
In many high-reliability systems, subsystem or component failures that do not cause a system failure can still degrade system performance. When modeling such systems, ignoring the effect of reliability on performance can lead to incomplete or inaccurate results. In the present work, performability modeling, the combined analysis of reliability and performance, is introduced; some examples of applications where performance and reliability need to be modeled together are given; a strategy for modeling the effect of reliability on performance is outlined; and metrics that help quantify this effect are discussed. Some mathematical models for performability are introduced, and an example is used to illustrate how such models can be applied.
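Performability in its simplest form weights each system state's performance level by its probability. The three-state numbers below are invented purely for illustration:

```python
def performability(state_probs, state_perf):
    """Expected performance level: sum over states of P(state) * perf(state)."""
    return sum(p * w for p, w in zip(state_probs, state_perf))

# Up / degraded (one redundant unit lost) / down, with relative throughput.
e_perf = performability([0.90, 0.08, 0.02], [1.0, 0.5, 0.0])
# Pure availability (0.98) overstates delivered performance (0.94).
assert abs(e_perf - 0.94) < 1e-9
```

The gap between the availability figure and the performability figure is exactly the degradation that a reliability-only analysis would miss.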

18.
This paper describes a different approach to software reliability growth modeling which enables long-term predictions. Using relatively common assumptions, it is shown that the average value of the failure rate of the program, after a particular use-time t, is bounded by N/(e·t), where N is the initial number of faults. This is conservative, since it places a worst-case bound on the reliability rather than making a best estimate. The predictions may be relatively insensitive to assumption violations over the longer term. The theory offers the potential for making long-term software reliability growth predictions based solely on prior estimates of the number of residual faults. The predicted bound appears to agree with a wide range of industrial and experimental reliability data. Less pessimistic results can be obtained if additional assumptions are made about the failure rate distribution of faults.
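The bound needs only the prior fault estimate, which is what makes the long-term prediction possible. A direct transcription, where the fault count and use time are example numbers rather than data from the paper:

```python
import math

def worst_case_failure_rate(n_faults, use_time):
    """Conservative bound from the abstract: the average failure rate
    after use-time t is at most N / (e * t)."""
    return n_faults / (math.e * use_time)

# With an estimated 100 residual faults, after 10,000 hours of use the
# bound is below four failures per thousand hours on average.
bound = worst_case_failure_rate(100, 10_000.0)
assert bound < 0.004
```

Because the bound decays as 1/t, the guaranteed average failure rate keeps improving with accumulated use-time even with no further fault information.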

19.
Robust control techniques such as sliding mode control (SMC) require a dynamic model of the plant and bounds on modeling uncertainty to formulate control laws with guaranteed stability. Although techniques for modeling dynamic systems and estimating model parameters are well established, very few procedures exist for estimating uncertainty bounds. In the case of SMC design, a conservative global bound is usually chosen to ensure closed-loop stability over the entire operating space. The primary drawbacks of this conservative, "hard computing" approach are excessive control activity and reduced performance, particularly in regions of the operating space where the model is accurate. In this paper, a novel approach to estimating uncertainty bounds for dynamic systems is introduced. This "soft computing" approach uses a unique artificial neural network, the 2-Sigma network, to bound modeling uncertainty adaptively. This fusion of intelligent uncertainty bound estimation with traditional SMC results in a control algorithm that is both robust and adaptive. Simulations and experimental demonstrations conducted on a magnetic levitation system confirm these capabilities and reveal excellent tracking performance without excessive control activity.

20.
Real-time performance reliability prediction
The purpose of this paper is to describe an approach to real-time reliability prediction applicable to an individual product unit operating under dynamic conditions. The concept of conditional reliability estimation is extended to real-time applications using time-series analysis techniques to bridge the gap between physical measurement and reliability prediction. The model is based on empirical measurements, is self-generating, and is applicable to online applications. This approach has been demonstrated at the prototype level. Physical performance is measured and forecast across time to estimate reliability; time-series analysis is adapted to forecast performance, applying exponential smoothing with a linear level and trend adaptation. This procedure is computationally recursive and provides short-term, real-time performance forecasts linked directly to conditional reliability estimates. Failure clues must be present in the physical signals, and failure must be defined in terms of physical measures, to accomplish this linkage. On-line, real-time applications of performance reliability prediction are useful in operation control as well as in predictive maintenance.
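The forecasting step described, exponential smoothing with a linear level and trend adaptation, is Holt's method. A compact sketch with arbitrary smoothing constants and an invented performance signal:

```python
def holt_forecast(series, alpha=0.3, beta=0.1, horizon=1):
    """Holt's exponential smoothing with a linear trend; returns the
    h-step-ahead forecast of the measured performance signal."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# A steadily drifting performance signal is forecast to keep drifting,
# which is the "failure clue" a conditional reliability estimate uses.
drift = [10.0 + 0.5 * i for i in range(20)]
assert holt_forecast(drift, horizon=2) > drift[-1]
```

The recursion updates only two numbers per sample, which is what makes the procedure suitable for the online, per-unit setting the abstract describes.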


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号