Similar Documents
20 similar documents found (search time: 62 ms)
1.
A generalized formulation of several circuit design problems, such as manufacturing yield optimization, circuit performance variability minimization, deterministic and statistical minimax design, Income Index maximization, and the Taguchi approach, is developed. Several other "intermediate" problems can be defined in a sense similar to that used in Zadeh's fuzzy set theory. A specific problem is identified by the selection of a generalized membership function ω(·) of the acceptability region and a sequence of values of the "smoothing" parameter β. Generalized gradient formulas are developed, and various possible algorithmic implementations are discussed. As a result, trade-offs between different design strategies can be investigated by circuit designers within one coherent methodology.
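As a rough illustration of the smoothing idea only (hypothetical code, not the paper's formulation), a sigmoid can play the role of a generalized membership function ω(·) of the acceptability region, with β steering how closely it approximates the crisp 0/1 indicator:

```python
import numpy as np

def membership(g, beta):
    """Smooth membership omega(g) of the acceptability region {g >= 0}.

    As beta grows, the sigmoid approaches the crisp 0/1 indicator,
    recovering deterministic (yes/no) acceptability; small beta gives
    a soft, fuzzy-set-like grading. (Illustrative choice only.)
    """
    return 1.0 / (1.0 + np.exp(-beta * g))

def smoothed_yield(g_samples, beta):
    """Estimated 'yield': average membership over Monte Carlo samples
    of the performance-constraint slack g(x)."""
    return float(np.mean(membership(np.asarray(g_samples), beta)))

g = np.random.normal(loc=0.5, scale=1.0, size=10_000)  # toy constraint slacks
for beta in (1.0, 5.0, 50.0):
    print(f"beta={beta:5.1f}  estimated yield={smoothed_yield(g, beta):.3f}")
```

Large β recovers deterministic acceptability (yield counting); small β grades borderline designs, in the fuzzy-set spirit mentioned above.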

2.
Technology scaling and increased circuit sensitivity have led to more and more transient faults in embedded systems. Transient faults are intermittent, non-predictable faults caused by external events such as energetic particles striking the circuits. These faults do not cause permanent damage, but they may affect the running applications. One way to ensure the correct execution of embedded applications is to keep debugging and testing even after the systems have shipped, complemented with recovery/restart options. In this context, the executable assertions that have been widely used during development for design validation can be deployed again in the final product. In this way, the application uses the assertions to monitor itself during actual execution and does not allow erroneous, out-of-specification behavior to manifest itself. This kind of software-level fault tolerance may represent a viable solution to the problem of developing commercial off-the-shelf embedded systems with dependability requirements. But software-level fault tolerance comes at a computational cost, which may affect time-constrained applications. Thus, the executable assertions must be introduced at the best possible points in the application code, in order to satisfy timing constraints and to maximize error detection efficiency. We present an approach for optimizing the placement of executable assertions in time-constrained embedded applications for the detection of transient faults. In this work, assertions have different characteristics, such as tightness, i.e., error coverage, and performance degradation. Taking these properties into account, we have developed an optimization methodology which identifies candidate locations for assertions and selects the set of assertions with the highest tightness at the lowest performance degradation. The set of selected assertions is guaranteed to respect the real-time deadlines of the embedded application. Experimental results have shown the effectiveness of the proposed approach, which provides the designer with a flexible infrastructure for the analysis of time-constrained embedded applications and transient-fault-oriented executable assertions.
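A minimal sketch of what an executable assertion for transient-fault detection can look like in practice (the application loop, the bounds, and the recovery hook are all hypothetical; the paper's assertions and placement optimization are more elaborate):

```python
def assert_in_range(name, value, lo, hi, on_error):
    """Executable assertion: check a specification invariant at run time.

    'Tightness' corresponds to how narrow [lo, hi] is (error coverage);
    the time spent in this check is its performance degradation.
    """
    if not (lo <= value <= hi):
        on_error(f"{name}={value} outside [{lo}, {hi}]")  # trigger recovery/restart

def recover(msg):
    print("transient fault detected:", msg)  # e.g., roll back to a checkpoint

# Hypothetical control loop: the assertion monitors a computed state value
state = 0.0
for step in range(5):
    state = state + 0.3  # application computation (could be corrupted by a fault)
    assert_in_range("state", state, 0.0, 1.0, on_error=recover)
```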

3.
New electron-beam probing techniques have been developed which are based either on the subtraction of two stroboscopic phases at two test vectors during the recording of a single image, or on the probing of two phases at a single physical point of interest. These are, respectively, the image-based dual-phase image and the point-probing-based dual-phase measurement. These techniques are well suited to IC-internal failure analysis: they enable the detection of "open" and "stuck-at" faults and can be linked to fault simulation software and a fault dictionary.

4.
In cases where device numbers are limited, large statistical studies to verify reliability are impractical. Instead, an approach that incorporates a solid base of modelling, simulation, and materials science into a standard reliability methodology makes more sense and leads to a science-based reliability methodology. The basic reliability method is: (a) design, model, and fabricate; (b) test structures and devices; (c) identify failure modes and mechanisms; (d) develop predictive reliability models (accelerated aging); and (e) develop qualification methods. At various points in these steps, technical data are required on MEMS material properties (residual stress, fracture strength, fatigue, etc.), MEMS surface characterization (stiction, friction, adhesion, role of coatings, etc.), or MEMS modelling and simulation (finite element analysis, uncertainty analysis, etc.). This methodology is discussed as it relates to reliability testing of a micro-mirror array consisting of 144 piston mirrors. In this case, 140 mirrors were cycled full stroke (1.5 μm) 26 billion times with no failure. Using our technical science base, fatigue of the springs was eliminated as a mechanism of concern. Eliminating this wear-out mechanism allowed use of the exponential statistical model to predict lower-bound confidence levels for the failure rate in a "no-fail" condition.
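For context, the standard "no-fail" bound under the exponential model can be computed as below (a generic sketch using the test exposure quoted in the abstract; the authors' exact confidence computation may differ):

```python
import math

def zero_failure_upper_bound(total_exposure, confidence=0.95):
    """Upper confidence bound on the failure rate lambda under the
    exponential model when zero failures are observed over
    'total_exposure' unit-cycles: lambda_ub = -ln(1 - confidence) / T.
    (Equivalent to the chi-square bound with 2 degrees of freedom.)"""
    return -math.log(1.0 - confidence) / total_exposure

T = 140 * 26e9          # 140 mirrors, 26 billion full-stroke cycles each
lam_ub = zero_failure_upper_bound(T, 0.95)
print(f"95% upper bound on per-cycle failure rate: {lam_ub:.3e}")
print(f"corresponding lower-bound MTTF: {1/lam_ub:.3e} cycles")
```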

5.
MPARM: Exploring the Multi-Processor SoC Design Space with SystemC
Technology is making the integration of a large number of processors on the same silicon die technically feasible. These multi-processor systems-on-chip (MP-SoCs) can provide a high degree of flexibility and represent the most efficient architectural solution for supporting multimedia applications, which are characterized by a demand for highly parallel computation. As a consequence, tools for the simulation of these systems are needed at the design stage, with the distinctive requirements of simulation speed, accuracy, and the capability to support design space exploration. We developed a complete simulation platform for an MP-SoC, called MPARM, based on SystemC as the modelling and simulation environment, and including models for the processors, the AMBA-compliant communication architecture, and the memories, as well as support for parallel programming. A fully operational Linux version for embedded systems has been ported to this platform, and a cross-toolchain has been developed as well. Our MP simulation environment turns out to be a powerful tool for the MP-SoC design stage. As an example thereof, we use our tool to evaluate the impact of architectural parameters and bus arbitration policies on system performance, showing that the effectiveness of a particular system configuration strongly depends on the application domain and the generated traffic profile.

Luca Benini received the B.S. degree (summa cum laude) in electrical engineering from the University of Bologna, Italy, in 1991, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University in 1994 and 1997, respectively. He is an associate professor in the Department of Electronics and Computer Science at the University of Bologna. He also holds visiting researcher positions at Stanford University and the Hewlett-Packard Laboratories, Palo Alto, CA. Dr. Benini's research interests are in all aspects of computer-aided design of digital circuits, with special emphasis on low-power applications, and in the design of portable systems. He is co-author of the book Dynamic Power Management: Design Techniques and CAD Tools, Kluwer, 1998. Dr. Benini is a member of the technical program committees of several technical conferences, including the Design Automation Conference, the International Symposium on Low Power Design, and the International Symposium on Hardware-Software Codesign.

Davide Bertozzi received the B.S. degree in electrical engineering from the University of Bologna, Bologna, Italy, in 1999. He is currently pursuing the Ph.D. degree at the same university and is expected to graduate in 2003. His research interests concern the development of SoC co-simulation platforms, the exploration of SoC communication architectures, and low-power system design.

Alessandro Bogliolo received the Laurea degree in electrical engineering and the Ph.D. degree in electrical engineering and computer science from the University of Bologna, Bologna, Italy, in 1992 and 1998. In 1995 and 1996 he was a Visiting Scholar at the Computer Systems Laboratory (CSL), Stanford University, Stanford, CA. From 1999 to 2002 he was an Assistant Professor at the Department of Engineering (DI) of the University of Ferrara, Ferrara, Italy. Since 2002 he has been with the Information Science and Technology Institute (STI) of the University of Urbino, Urbino, Italy, as an Associate Professor. His research interests are mainly in the area of digital integrated circuits and systems, with emphasis on low power and signal integrity.

Francesco Menichelli was born in Rome in 1976. He received the Electronic Engineering degree in 2001 from the University of Rome "La Sapienza". Since 2002 he has been a Ph.D. student in Electronic Engineering at "La Sapienza" University of Rome. His scientific interests focus on low-power digital design, in particular techniques for low power consumption, and on power modeling and simulation of digital systems.

Mauro Olivieri received a Master degree in electronic engineering (cum laude) in 1991 and a Ph.D. degree in electronic and computer engineering in 1994 from the University of Genoa, Italy, where he also worked as an assistant professor. In 1998 he joined the University of Rome "La Sapienza", where he is currently an associate professor in electronics. His research interests are digital systems-on-chip and microprocessor core design. Prof. Olivieri supervises several research projects supported by private and public funding in the field of VLSI system design.

6.
Today's quality paradigms deal with a very broad array of concepts and approaches, which this paper describes in detail. Quality is no longer the "policeman" but a needed "team player". It is important that all concerned understand and apply these new tools.

7.
The present practice in space electronics is to use design margins, called "derating", and to predict the end-of-life behaviour of components so that designs can accommodate worst-case part performance. The European Space Agency "Part Standard Specification-01-301" (the derating and end-of-life performance prediction document) is being updated using consistent rationales based on recognized acceleration laws, activation energies (Ea), and physical models. Data reviews are carried out to confirm the calculations of ageing drifts. Optimisation of the present rules leads to a more rigorous design and reliability tool.
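As background on the acceleration-law rationale, the Arrhenius model is the usual form for temperature acceleration; below is a generic sketch with illustrative numbers, not values taken from the ESA specification:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress temperatures:
    AF = exp( (Ea/k) * (1/T_use - 1/T_stress) ), temperatures in kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Illustrative only: Ea = 0.7 eV, 55 C use vs. 125 C accelerated stress
print(f"AF = {arrhenius_af(0.7, 55.0, 125.0):.1f}")
```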

8.
Very few articles dealing with software quality assurance have been published over the past few years, and these have come primarily from the military and medical industries. It is interesting to note that many of the topics included or suggested in these articles are not presently in use, and the concepts remain "relatively unknown". Effective statistical techniques are, unfortunately, largely lacking. The "real key" is to understand that software quality assurance is "10 years behind the hardware", as stated by an SWQA department manager at a large military products (DOD) facility. The intent of these offerings, then, is to share some ideas in a very complex and challenging arena, one in which methodology has not been able to keep up with the rapid advances required of it and imposed by the primary systems.

9.
In this paper a general approach to optimal process control is developed which leads to relatively simple objective functions for optimization. These two features, "generality" and "simplicity", are the foundation for the development of powerful approximation techniques that allow the simple determination of approximately optimal solutions in a great number of cases.

10.
Orestis, Fotini-Niovi. Ad Hoc Networks, 2008, 6(2): 245–259
Concurrent with the rapid expansion of wireless networks is an increasing interest in providing quality-of-service (QoS) support in them. As a consequence, a number of medium access control protocols have been proposed which aim at providing service differentiation at the distributed wireless medium access layer. However, most of them provide only average performance assurances. We argue that average performance guarantees will be inadequate for a wide range of emerging multimedia applications and that "per-flow" service assurances must be provided instead. Based on m-ary tree algorithms, we propose an adaptive and distributed medium access algorithm for single-cell ad hoc networks that provides "per-flow" service assurances to flows whose QoS requirement can be expressed as a delay requirement. Both analytical and simulation experiments are conducted to assess the performance of the proposed scheme.
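For flavor, the contention-resolution family the proposal builds on can be sketched as follows (a toy m-ary splitting round, not the paper's adaptive, delay-aware algorithm):

```python
import random

def mary_tree_resolve(stations, m=3, rng=random):
    """Resolve a collision among 'stations' with m-ary tree splitting:
    colliding stations randomly split into m subgroups, and each subgroup
    is resolved recursively; returns the order of successful slots."""
    if len(stations) <= 1:
        return list(stations)          # idle or successful slot
    groups = [[] for _ in range(m)]    # collision: split into m subgroups
    for s in stations:
        groups[rng.randrange(m)].append(s)
    order = []
    for g in groups:
        order.extend(mary_tree_resolve(g, m, rng))
    return order

print(mary_tree_resolve(["A", "B", "C", "D", "E"], m=3))
```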

11.
For more than a decade, the focus of GaAs reliability testing has been on high-temperature lifetesting. Several failure mechanisms are highly accelerated by temperature, so this methodology has produced data that are easy to analyze and from which applicable lifetimes (albeit very long ones) are straightforward to predict. In actual use, however, GaAs devices fail through quite different failure mechanisms. This work provides data on which failure mechanisms actually occur, how they become more apparent with "volume manufacturing", and how they may be related to traditional quality metrics.

12.
A study of the electric field and temperature dependence of the breakdown and quasi-breakdown phenomena is presented for 3.5 nm ultra-thin SiO2 gate oxides. Using a methodology based on the concept of competing mechanisms between the breakdown and quasi-breakdown processes, the quasi-breakdown activation energy as well as the acceleration factor are determined. It is demonstrated for these 3.5 nm gate oxides that the quasi-breakdown temperature activation energy is almost temperature independent, in contrast to that of breakdown. Moreover, it is shown that the temperature dependence of the breakdown acceleration factor and the electric field dependence of the temperature activation energy cannot be explained by pure "E" or "1/E" models, but they can be interpreted by the "E" model if at least two types of molecular defect states are considered.
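For reference, the two competing field-acceleration forms referred to as the "E" and "1/E" models are usually written as follows (textbook forms with generic fitting parameters, not the paper's extracted values):

```latex
% "E" (thermochemical) model and "1/E" (anode hole injection) model
% for the time to breakdown t_BD as a function of the oxide field E_ox;
% gamma, G, t_0, tau_0 are fitting parameters, E_a the activation energy,
% k the Boltzmann constant, T the temperature.
t_{BD}^{(E)}   = t_0    \exp\!\left(-\gamma E_{ox}\right) \exp\!\left(\frac{E_a}{kT}\right),
\qquad
t_{BD}^{(1/E)} = \tau_0 \exp\!\left(\frac{G}{E_{ox}}\right) \exp\!\left(\frac{E_a}{kT}\right)
```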

13.
Software tools offer powerful support in the areas of engineering specification, design, implementation, and test. The tools are at their most potent when they actively promote agility and responsiveness throughout a product life cycle and leave a legacy of knowledge to inform future product development. Model-based design delivers these benefits by treating a simulation of the system under development as an executable specification. This executable specification may be regarded as "one truth" across engineering teams, with the simulation abstracted or enhanced as appropriate. First-principles, data-driven, and physical modeling further strengthen model-based design by making its agility and responsiveness relevant to both algorithmic and non-algorithmic design considerations. Indeed, models are a powerful means of supporting in-service operation, diagnosis of unintended operation, and the assessment and upgrade of control systems and/or system architectures over the entire life cycle of a product. This paper considers the benefits of physical modeling and model-based design through the example of a high-acceleration linear motor. The motor type, the power-electronic drive switching strategy, and the power-electronic drive architecture are considered. Finally, the use of parallel computing within the context of this application is discussed, in particular as an effective means of generating results for a large number of operational scenarios in a time-effective manner.
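As an illustration of the closing point on parallel computing, a large batch of operational scenarios can be swept in parallel roughly as below (the motor model is a stand-in placeholder, not the paper's physical model):

```python
from concurrent.futures import ProcessPoolExecutor
import itertools

def simulate(params):
    """Stand-in plant model: returns a toy 'peak current' for one scenario."""
    voltage, load = params
    return voltage / (1.0 + 0.05 * load)  # placeholder dynamics

if __name__ == "__main__":
    voltages = [12.0, 24.0, 48.0]
    loads = range(0, 100, 10)
    scenarios = list(itertools.product(voltages, loads))
    with ProcessPoolExecutor() as pool:           # one worker per core
        results = list(pool.map(simulate, scenarios))
    worst = max(zip(results, scenarios))          # pick the worst case
    print(f"{len(scenarios)} scenarios evaluated; worst case: {worst}")
```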

14.
The problem of verifying the integrity and quantifying the reliability of semi-custom IC product families is addressed, and one possible approach to efficient "generic qualification" is outlined. The approach relies on qualification via a set of chips specifically designed for this purpose, rather than via one or more "product" chips. Key features of these test vehicles are that the entire "cell library" is represented and that "worst-case" features are maximized.

15.
Bluetooth is a promising technology for personal/local area wireless communications. A Bluetooth scatternet is composed of overlapping piconets, each with a small number of devices sharing the same radio channel. A scatternet may have different topological configurations, depending on the number of constituent piconets, the roles of the devices involved, and the configuration of the links. This paper discusses the scatternet formation problem by analyzing the topological characteristics of the scatternets formed. A matrix-based representation of the network topology is used to define metrics that are applied to evaluate the key cost parameters and the scatternet performance. Numerical examples are presented and discussed, highlighting the impact of metric selection on scatternet performance. A distributed algorithm for scatternet topology optimization is then introduced that supports the formation of a "locally optimal" scatternet based on a selected metric. Numerical results obtained by adopting this distributed approach to "optimize" the network topology are shown to be close to the global optimum.
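To make the matrix-based representation concrete, here is a minimal sketch with a hypothetical five-node topology and one simple metric (the paper defines richer cost parameters):

```python
import numpy as np

# Adjacency matrix of a toy 5-node scatternet: rows/cols are devices,
# a 1 marks a link; node 2 bridges two piconets (a shared gateway).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0],
])

def average_shortest_path(adj):
    """Average hop distance over all node pairs, via powers of the
    adjacency matrix (entry (i, j) of A^k > 0 iff a k-hop walk exists)."""
    n = len(adj)
    dist = np.full((n, n), np.inf)
    np.fill_diagonal(dist, 0)
    reach, power = adj.astype(bool), adj.copy()
    for hops in range(1, n):
        dist[(dist == np.inf) & reach] = hops
        power = power @ adj
        reach = power.astype(bool)
    off_diag = dist[~np.eye(n, dtype=bool)]
    return off_diag.mean()

print(f"average path length: {average_shortest_path(A):.2f} hops")
```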

16.
This paper presents a model for determining the optimum stock in a three-level depot system. The method for determining stock levels uses the "incremental performance per price algorithm" (IPPA) and the "new Lawler-Bell algorithm" (NLB). The practical calculation is carried out on a computer, with the results presented as a list of the necessary spare units from the first to the third level.
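The incremental-performance-per-price idea is, in essence, a greedy marginal-benefit-per-cost allocation; the sketch below uses invented items and a toy diminishing-returns model, not the paper's depot data:

```python
def ippa_allocate(items, budget):
    """Greedy allocation: repeatedly buy one more unit of whichever spare
    gives the largest availability gain per unit price, until the budget
    is exhausted. 'items' maps name -> (unit_price, base_gain); the gain
    halves with each unit already stocked (toy diminishing returns)."""
    stock = {name: 0 for name in items}
    spent = 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, (price, base_gain) in items.items():
            gain = base_gain * (0.5 ** stock[name])   # diminishing returns
            if spent + price <= budget and gain / price > best_ratio:
                best, best_ratio = name, gain / price
        if best is None:
            return stock, spent
        stock[best] += 1
        spent += items[best][0]

items = {"pump": (400.0, 0.10), "valve": (100.0, 0.04), "board": (250.0, 0.07)}
print(ippa_allocate(items, budget=1500.0))
```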

17.
In this article we present a new method for designing self-testing checkers for t-UED and BUED codes. The main idea is to map words of the considered code to words of a code of the same type in which the value of t or the number of check bits is reduced, repeating this with the obtained words until a parity code is reached, or to translate the code words into words of a code for which such a mapping is possible. First we consider Borden codes for t = 2^k - 1, Bose codes, and Bose-Lin codes. The mapping operation is realized by averaging the weights and check-symbol values of the code words. The resulting checkers have a simple and regular structure that is independent of the set of code words provided by the circuit under check. The checkers are very well suited for use as embedded checkers, since they are self-testing with respect to single stuck-at faults under very weak assumptions. All three checker types can be tested with 2 or 3 code words. We also propose a novel approach to the design of checkers for Blaum codes that requires far fewer code-word tests than existing solutions.

18.
The "giga-chip era" has begun. A challenging new approach to ULSI reliability is now greatly needed in response to the "paradigm shift" being brought about by simple scaling limitations, increased process complexity, and the diversification of ULSI applications into advanced multimedia and personal digital assistant (PDA) systems. A good example of this shift is the movement from simple failure analysis, based on sampling the output of a manufacturing line, to the "building-in reliability" approach. To pursue this technique, greater importance will be attached to a deeper physical understanding of the significant relationships between input variables and product reliability (including frequent use of computer-aided design (CAD) and design automation (DA)), and to total concurrent engineering from research labs to production sites. Furthermore, fast new ULSI testing methods and new yield-enhancing redundancy techniques that reduce costs will be increasingly needed to achieve high reliability for ULSIs with 10^9 devices on a single chip. Only with these approaches can we pave the way for giga-scale integration (GSI) in the 21st century.

19.
Routing of packets in networks requires that a path be selected from a source node to a destination either dynamically, while the packets are being forwarded, or statically (in advance), as in source routing. Quality-of-service (QoS) driven routing has been proposed using a protocol called the "Cognitive Packet Network" (CPN), which dynamically selects paths through a store-and-forward packet network so as to provide best-effort QoS to routed peer-to-peer connections. CPN operates much like an ad hoc protocol within a wired setting, and uses smart packets to select routes based on QoS requirements. We extend the path discovery process in CPN with a genetic algorithm which can help discover new paths that smart packets may not have found. We describe how possible routes can "evolve" from prior knowledge and then be selected based on "fitness" with respect to QoS. We detail the design of the algorithm and of its implementation, and report the resulting QoS measurements.
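A compact sketch of the route-evolution idea: crossing over two known paths at a shared node and ranking candidates by a QoS cost (topology and delays are invented; the actual CPN/GA implementation differs):

```python
import random

# Toy link-delay map for a small network (hypothetical "measured QoS")
DELAY = {("S","A"): 2, ("A","D"): 5, ("S","B"): 3, ("B","C"): 1,
         ("C","D"): 2, ("A","C"): 1, ("B","D"): 6}

def cost(path):
    """Fitness: total delay along the path (lower is better)."""
    return sum(DELAY[e] if e in DELAY else DELAY[(e[1], e[0])]
               for e in zip(path, path[1:]))

def crossover(p1, p2, rng=random):
    """Evolve a candidate route: splice two parent paths at a common node."""
    common = [n for n in p1[1:-1] if n in p2[1:-1]]
    if not common:
        return p1
    pivot = rng.choice(common)
    child = p1[:p1.index(pivot)] + p2[p2.index(pivot):]
    return child if len(set(child)) == len(child) else p1  # reject loops

parents = [["S","A","D"], ["S","B","C","D"], ["S","A","C","D"]]
pool = parents + [crossover(random.choice(parents), random.choice(parents))
                  for _ in range(20)]
best = min(pool, key=cost)
print("best route:", best, "delay:", cost(best))
```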

20.
This research effort has developed a mathematical model for bathtub-shaped hazards (failure rates) for operating systems with uncensored data. The model is used to predict the reliability of systems exhibiting such hazards. Early in the lifetime of a system there may be a relatively large number of failures due to initial weaknesses or defects in materials and manufacturing processes; this period is called the "infant mortality" period. During the middle period of a system's operation, fewer failures occur, caused when environmental stresses exceed the design strength of the system. It is difficult to predict the environmental stress amplitudes or the system strengths as deterministic functions of time, so these middle-life failures are often called "random failures". As the system ages it deteriorates, and more failures occur; this region of failure is called the "wearout" period. Graphing these failure rates together results in a bathtub-shaped curve. The model developed for this bathtub pattern of failure accounts for all three failure regions simultaneously. The model has been validated for accuracy using Halley's mortality table, and it is used to predict reliability with both least-squares and maximum-likelihood estimators.
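A minimal numeric sketch of one common three-term bathtub hazard and the reliability it implies (an illustrative functional form, not necessarily the authors' model):

```python
import numpy as np

def bathtub_hazard(t, a=0.5, b=2.0, c=0.02, d=1e-4, k=3.0):
    """Three-region hazard: a decreasing infant-mortality term, a constant
    random-failure term, and an increasing wear-out term (illustrative)."""
    return a * np.exp(-b * t) + c + d * t**k

t = np.linspace(0.0, 10.0, 1000)
h = bathtub_hazard(t)
# Reliability R(t) = exp(-integral of h), via a cumulative trapezoid rule
H = np.concatenate(([0.0], np.cumsum((h[1:] + h[:-1]) / 2 * np.diff(t))))
R = np.exp(-H)
print(f"hazard minimum near t = {t[np.argmin(h)]:.2f}")
print(f"R(5) = {R[np.searchsorted(t, 5.0)]:.3f}")
```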
