Similar Literature
20 similar articles found.
1.
An important problem facing a manufacturer is determining how long to burn in items (to eliminate early failures) and the age at which to replace them (to avoid failures due to wearout). The problem becomes difficult when the time-to-failure distribution of an item is unknown and must be estimated from test and operational data. This paper describes a method of statistical data analysis that is readily applied to this decision problem under a realistic but general loss (or gain) function. The method is a multiparameter Bayesian analysis that requires multiple integration of the (multivariate) posterior of the parameters of the time-to-failure distribution to obtain the expected loss (or gain) resulting from a particular choice of burn-in time and replacement age. This integration is performed by a Monte Carlo procedure using importance sampling. An example demonstrates the flexibility of the method. The data are a mixture of "point" and truncated data, which often create difficulties for conventional methods of decision analysis. In addition, since the method permits up to ten parameters for the family of time-to-failure distributions, a "bathtub" hazard rate function is used to generate the data for the example. The results are presented as Bayesian confidence intervals for the true hazard rate function and as the expected loss as a function of burn-in time and age at replacement.
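A minimal sketch of the Monte Carlo integration step, assuming a hypothetical two-parameter Weibull time-to-failure model with stand-in posterior and loss functions (the paper's model allows up to ten parameters); none of the numbers come from the paper:

```python
import numpy as np

def log_posterior(shape, scale):
    # Placeholder: a real analysis would combine priors with the likelihood
    # of the mixed point/truncated data here.
    return -0.5 * (np.log(shape) ** 2 + (np.log(scale) - 2.0) ** 2)

def loss(burn_in, replace_age, shape, scale):
    # Illustrative loss: units lost to burn-in plus wearout risk at replacement.
    survive_burn_in = np.exp(-(burn_in / scale) ** shape)
    fail_by_replacement = 1.0 - np.exp(-(replace_age / scale) ** shape)
    return 1.0 * (1.0 - survive_burn_in) + 5.0 * fail_by_replacement

rng = np.random.default_rng(0)
n = 10_000
# Importance density g: independent log-normals roughly covering the posterior.
shape_s = rng.lognormal(0.0, 1.0, n)
scale_s = rng.lognormal(2.0, 1.0, n)
log_g = (-np.log(shape_s) - 0.5 * np.log(shape_s) ** 2
         - np.log(scale_s) - 0.5 * (np.log(scale_s) - 2.0) ** 2)
w = np.exp(log_posterior(shape_s, scale_s) - log_g)
w /= w.sum()                                  # self-normalized importance weights

expected_loss = np.sum(w * loss(10.0, 100.0, shape_s, scale_s))
```

Scanning this estimate over a grid of burn-in times and replacement ages gives the expected-loss surface from which the optimal pair is read off.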

2.
The dual-Vt design technique has proven extremely effective in reducing subthreshold leakage in both the active and standby modes of operation of a circuit in submicrometer technologies. However, aggressive technology scaling causes the different leakage components (subthreshold, gate, and junction tunneling) to become a significant portion of total power dissipation in CMOS circuits. High-Vt devices are expected to have higher junction tunneling current (due to stronger halo doping) than low-Vt devices, which in the worst case can increase the total leakage of a dual-Vt design. Moreover, process parameter variations (and in turn Vt variations) are expected to be significantly high in the sub-50-nm technology regime, which can severely affect yield. In this paper, we propose a device-aware simultaneous sizing and dual-Vt design methodology that considers each leakage component and the impact of process variation (on both delay and leakage power) to minimize total leakage while ensuring a target yield. Our results show that conventional dual-Vt design can overestimate leakage savings by 36% while incurring 17% average yield loss in a 50-nm predictive technology. The proposed scheme yields 10%-20% additional leakage power savings compared to conventional dual-Vt design, while ensuring target yield. This paper also shows that the nonscalability of the present way of realizing high-Vt devices results in negligible power savings beyond 25-nm technology. Hence, different dual-Vt process options, such as metal-gate work-function engineering, are required to realize high-performance, low-leakage dual-Vt designs in future technologies.

3.
This paper deals with the management of multimode sensors such as multifunction radars. We consider multitarget radar scheduling problems formulated as multivariate partially observed Markov decision processes (POMDPs). The aim is to compute the scheduling policy that determines which target to choose and how long to continue with this choice so as to minimize a cost function. We give sufficient conditions on the cost function, the dynamics of the Markov chain target, and the observation probabilities under which the optimal scheduling policy has a threshold structure with respect to the multivariate TP2 ordering. This implies that the optimal parameterized policy can be estimated efficiently. We then present stochastic approximation algorithms for estimating the best multilinear threshold policy.
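A rough illustration, not the paper's algorithm: a simultaneous-perturbation stochastic approximation (SPSA) loop tuning a linear threshold over a stand-in belief-state simulator. Every function, cost, and gain schedule here is invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cost(theta, episodes=200):
    # Stand-in simulator: draw a belief over 3 targets and apply the
    # threshold rule "switch targets when theta . pi > 0".
    total = 0.0
    for _ in range(episodes):
        pi = rng.dirichlet(np.ones(3))
        switch = theta @ pi > 0
        total += 0.3 if switch else 1.0 - pi[0]   # invented per-step cost
    return total / episodes

theta = np.zeros(3)
for k in range(1, 300):
    delta = rng.choice([-1.0, 1.0], size=3)       # simultaneous perturbation
    c_k, a_k = 0.1 / k**0.101, 0.05 / k**0.602    # common SPSA gain schedules
    # Two-sided difference; dividing by delta_i equals multiplying since
    # delta_i is +/-1.
    g = ((simulate_cost(theta + c_k * delta) -
          simulate_cost(theta - c_k * delta)) / (2.0 * c_k)) * delta
    theta -= a_k * g                              # descend the estimated cost
```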

4.
In many engineering applications, especially in communication engineering, one encounters a bandpass non-Gaussian random process with a slowly varying envelope. Among the available models for non-Gaussian random processes, spherically invariant random processes (SIRPs) play an important role. These processes are of interest mainly because they relax the assumption of Gaussianity while keeping many of its useful characteristics. In this paper, we derive a simple, closed-form formula for the expected number of maxima of a SIRP envelope. Since Gaussian random processes are special cases of SIRPs, the formula holds for Gaussian random processes as well. In contrast with the available, complicated expression for the expected number of maxima in the envelope of a Gaussian random process, our simple result holds for an arbitrary power spectrum. The key idea in deriving this result is the application of the characteristic function, rather than the probability density function, for calculating the expected level-crossing rate of a random process.

5.
6.
A technique for synthesizing reliable systems using parallel redundancy at the subsystem level is described. Specifying a loss function allows calculation of the average expected loss, which can be used to determine the optimum amount of redundancy for a given application. This optimum is statistical and therefore applicable only when the mission is performed many times.
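The redundancy trade-off can be sketched in a few lines; the failure probability, unit cost, and mission loss below are invented placeholders, assuming independent replica failures:

```python
def expected_loss(n, p_fail=0.1, unit_cost=1.0, mission_loss=50.0):
    """Expected loss with n parallel redundant subsystems.

    The subsystem fails only if all n replicas fail, so the failure
    probability is p_fail**n; the cost terms are illustrative.
    """
    return n * unit_cost + mission_loss * p_fail ** n

# Optimum redundancy for these placeholder costs.
best_n = min(range(1, 10), key=expected_loss)
```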

7.
A policy of periodic replacement with minimal repair at failure is considered for a multi-unit system that has a specific multivariate distribution. Under such a policy, the system is replaced at multiples of some period T, while minimal repair is performed for any intervening component failure. The cost of a minimal repair to a component is assumed to be a function of its age and of the number of minimal repairs. A simple expression is derived for the expected minimal-repair cost in an interval in terms of the cost function and the failure rate of the component. Necessary and sufficient conditions for the existence of an optimal replacement interval are found.
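A hedged sketch of the resulting optimization, assuming a Weibull failure rate and an age-dependent repair cost chosen only for illustration (the paper's cost function and parameter values are not reproduced here):

```python
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def hazard(t, shape=2.5, scale=100.0):
    # Weibull failure rate, increasing in age (illustrative parameters).
    return (shape / scale) * (t / scale) ** (shape - 1)

def repair_cost(t):
    return 1.0 + 0.01 * t          # repairs grow costlier with component age

def cost_rate(T, c_replace=50.0):
    # Expected minimal-repair cost over (0, T] is the integral of
    # repair cost times the failure rate; divide total cost by T.
    repairs, _ = quad(lambda t: repair_cost(t) * hazard(t), 0.0, T)
    return (c_replace + repairs) / T

opt = minimize_scalar(cost_rate, bounds=(1.0, 1000.0), method="bounded")
```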

8.
The problem of sequentially scanning and predicting data arranged in a multidimensional array is considered. We introduce the notion of a scandictor, which is any scheme for the sequential scanning and prediction of such multidimensional data. The scandictability of any finite (probabilistic) data array is defined as the best achievable expected "scandiction" performance on that array. The scandictability of any (spatially) stationary random field on ℤ^m is defined as the limit of its scandictability on finite "boxes" (subsets of ℤ^m) as their edges become large. The limit is shown to exist for any stationary field and to be essentially independent of the ratios between the box dimensions. Fundamental limitations on scandiction performance in both the probabilistic and the deterministic settings are characterized for the family of difference loss functions. We find that any stochastic process or random field that can be generated autoregressively with a maximum-entropy innovation process is optimally "scandicted" the way it was generated. These results are specialized for cases of particular interest. The scandictability of any stationary Gaussian field under the squared-error loss function is given a single-letter expression in terms of its spectral measure and is shown to be attained by the raster scan. For a family of binary Markov random fields (MRFs), the scandictability under the Hamming distortion measure is fully characterized.

9.
In encoding and decoding, erasure codes over binary fields require only AND and XOR operations and therefore have high computational efficiency; they are widely used in many areas of information technology. This paper proposes a matrix decoding method, a universal data reconstruction scheme for erasure codes over binary fields. In addition to pre-judging whether the errors can be recovered, the method can rebuild sectors of lost data on a fault-tolerant storage system constructed with erasure codes after disk errors. The data reconstruction process has simple, clear steps, which makes it straightforward to implement in code. Moreover, it can easily be applied to other non-binary fields, so the method is expected to find extensive application in the future.
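A minimal sketch of such matrix decoding over GF(2): Gaussian elimination by XOR row updates, with a rank check serving as the pre-judgment of recoverability. The layout (code equations as rows of A, surviving encoded sectors as rows of b) and all names are assumptions for illustration:

```python
import numpy as np

def gf2_reconstruct(A, b):
    """Solve A x = b over GF(2); return the data sectors x, or None."""
    A = (A.copy() % 2).astype(np.uint8)
    b = b.copy().astype(np.uint8)
    rows, n = A.shape
    row, pivots = 0, []
    for col in range(n):
        hits = np.nonzero(A[row:, col])[0]
        if hits.size == 0:
            continue                      # no pivot in this column
        pr = row + hits[0]
        A[[row, pr]], b[[row, pr]] = A[[pr, row]], b[[pr, row]]  # swap rows
        for r in range(rows):
            if r != row and A[r, col]:
                A[r] ^= A[row]            # XOR row update = GF(2) elimination
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
        if row == rows:
            break
    if len(pivots) < n:
        return None                       # pre-judgment: data not recoverable
    x = np.zeros((n, b.shape[1]), dtype=np.uint8)
    for r, col in enumerate(pivots):
        x[col] = b[r]
    return x
```

Because the elimination touches whole rows of b, each row can be a full sector of bytes and the same XOR schedule reconstructs every byte position at once.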

10.
In analogy to the orthogonal functionals of the Brownian-motion process developed by Wiener, Itô, and others, a theory of the orthogonal functionals of the Poisson process is presented, making use of the concept of multivariate orthogonal polynomials. Following a brief discussion of Charlier polynomials of a single variable, multivariate Charlier polynomials are introduced; an explicit representation as well as an orthogonality property are given. A multiple stochastic integral of a multivariate function with respect to the Poisson process, called the multiple Poisson-Wiener integral, is defined using the multivariate Charlier polynomials. A multiple Poisson-Wiener integral, which gives a polynomial functional of the Poisson process, is orthogonal to any other of different degree. Several explicit forms are given for the sake of application. It is shown that any nonlinear functional of the Poisson process with finite variance can be developed in terms of these orthogonal functionals, corresponding to the Cameron-Martin theorem in the case of the Brownian-motion process. Finally, some possible applications to nonlinear problems associated with the Poisson process are briefly discussed.
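For orientation, a sketch of the single-variable Charlier polynomials that the paper's multivariate construction generalizes, computed from the standard three-term recurrence; the orthogonality check at the end is illustrative only:

```python
import numpy as np
from math import exp, factorial

def charlier(n, x, a):
    """C_n(x; a) via a*C_{n+1} = (n + a - x)*C_n - n*C_{n-1}, C_0 = 1."""
    x = np.asarray(x, dtype=float)
    c_prev, c = np.ones_like(x), (a - x) / a
    if n == 0:
        return c_prev
    for k in range(1, n):
        c_prev, c = c, ((k + a - x) * c - k * c_prev) / a
    return c

# Orthogonality under the Poisson(a) weight e^(-a) a^x / x!:
a = 2.0
xs = np.arange(60)
w = np.array([exp(-a) * a**k / factorial(k) for k in xs])
inner = np.sum(w * charlier(2, xs, a) * charlier(3, xs, a))   # approximately 0
```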

11.
With the scaling of device dimensions, microscopic variations in the number and location of dopant atoms in the channel region induce increasingly limiting electrical deviations in device characteristics such as threshold voltage. These atomic-level intrinsic fluctuations cannot be eliminated by external control of the manufacturing process and are most pronounced in the minimum-geometry transistors commonly used in area-constrained circuits such as SRAM cells. Consequently, a large number of cells in a memory are expected to be faulty due to process variations in sub-50-nm technologies. This paper analyzes SRAM cell failures under process variation and proposes a new variation-aware cache architecture suitable for high-performance applications. The proposed architecture adaptively resizes the cache to avoid faulty cells, thereby improving yield. The scheme is transparent to the processor architecture and has negligible energy and area overhead. Experimental results on a 32K direct-mapped L1 cache show that the proposed architecture achieves 93% yield, compared to the original 33%. SimpleScalar simulation shows that designing the data and instruction caches with the proposed architecture results in 1.5% and 5.7% average CPU performance loss (over the SPEC 2000 benchmarks), respectively, for chips with the maximum number of faulty cells that the proposed scheme can tolerate.

12.
An approximate evaluation is proposed for the individual mean waiting time and loss probability in mixed delay and loss (nondelay) systems with renewal and Poisson inputs handled by servers with exponential service times. The approximation is based on the GI approximation previously proposed by H. Akimaru et al. (1983, 1985), in which the mixed input process is regarded as renewal. Systems with mixed delay renewal and nondelay Poisson inputs, and systems with mixed nondelay renewal and delay Poisson inputs, are analyzed. Approximate formulas for the mean waiting time and loss probability for the respective inputs are presented in simple closed form, and comparisons with simulations show good accuracy. The formulas are expected to be useful for the analysis and optimum design of mixed delay and loss systems.

13.
Initially, the ability of personal computers to perform signal processing or multivariate analysis was severely limited by small memory address spaces and a lack of scientific language support. Recently, however, this situation has changed, with large memory sizes common and with the availability of mainframe languages such as FORTRAN-77 to support complex and double-precision expressions. Today, personal computers can be applied to data collection, multivariate analysis, pattern classification, simulation of signal processing hardware, and other engineering applications. We discuss the conversion of mainframe data analysis software for personal computers and the use of high-resolution personal computer graphics for data displays. The process is illustrated with the conversion of part of the IEEE signal processing library and of the ARTHUR81 multivariate analysis routines to run on a personal computer. Timing and accuracy results are given for two personal computers: the TI Professional and the IBM PC AT. The use of a personal computer to validate data, obtain measurement statistics, perform classification and cluster analysis, and perform modern spectral analysis is illustrated with run information and typical output displays.

14.
Consider a single-node queueing system that can be modeled by a finite quasi-birth-death (QBD) process. We present a computational technique for spectral analyses (i.e., second-order statistics) of the output, queue, and loss processes. The emphasis is placed on evaluating the output power spectrum and the input-output coherence function with respect to various input power spectral properties and system parameters. The coherence function is defined to measure the linear relationship between the input and output processes. Through evaluation of the coherence function, we identify a so-called nonlinear break frequency, ωb, below which low-frequency traffic stays intact through a queueing system. Such low-frequency input-output linearity plays an important role in characterizing the output process, which may form a partial input to other "downstream" queues of the network. In particular, the unchanged "upstream" low-frequency traffic characteristics are expected to have a significant impact on the "downstream" queues as well. Our numerical analysis examines the sensitivity of ωb to traffic characteristics and system parameters. The study further indicates that the link capacity requirement of traffic at a given buffer system is essentially characterized by its maximum input rate filtered at ωb.
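A coherence function of this kind can be estimated from sample paths with standard spectral tools; the sketch below uses scipy.signal.coherence on invented input/output series and is not the paper's QBD-based computation:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
x = rng.poisson(5.0, 4096).astype(float)            # stand-in input rate process
y = (np.convolve(x, np.ones(8) / 8.0, mode="same")  # smoothed, noisy "output"
     + rng.normal(0.0, 0.5, 4096))

f, cxy = coherence(x, y, fs=1.0, nperseg=256)
# Read a break frequency off where cxy falls away from ~1: below it the
# input passes through nearly linearly, above it the queue distorts traffic.
```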

15.
Radar Scattering and Target Imaging Obtained Using Ramp-Response Techniques
The response from targets illuminated by a transient plane wave whose time dependence takes the form of a ramp function has been used to generate signatures for conductive and penetrable targets. Modified geometrical profile functions of various targets are used to generate their ramp responses, which in turn are used to evaluate their scattered fields as a function of frequency. At early time, the ramp response is proportional to the physical cross-sectional area of the target (as a function of time, or of distance, as the wave propagates over the target); thus, the ramp response can be generated from the target's geometry. This is then Fourier transformed to obtain the spectrum of the ramp response. To obtain the spectrum of the impulse response, this result is simply multiplied by (jω)², where ω is the angular frequency. These simple steps can be readily performed by anyone with an electrical engineering background to obtain the backscattered fields to a reasonable approximation. The same fundamentals can also be used to approximate a target's image from the measured ramp response. This has been done for axial incidence on rotationally symmetric targets and can be extended to non-rotationally symmetric targets using an iterative process.
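These steps translate almost directly into code. The sketch below assumes a 1-m conducting sphere so that the cross-sectional-area profile has a closed form; the sampling choices are arbitrary:

```python
import numpy as np

c = 3e8                                       # speed of light, m/s
t = np.linspace(0.0, 4.0 / c, 512)            # time as the wave crosses the target
z = c * t / 2.0                               # penetration depth, 0..2 m
# Cross-sectional area of a radius-1 sphere at depth z: pi*(r^2 - (z - r)^2).
area = np.pi * np.clip(2.0 * z - z**2, 0.0, None)

ramp_spectrum = np.fft.rfft(area)             # spectrum of the ramp response
omega = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])
impulse_spectrum = (1j * omega) ** 2 * ramp_spectrum   # multiply by (j*omega)^2
```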

16.
The performance of different estimators describing the propagation of electroencephalogram (EEG) activity, namely Granger causality, the directed transfer function (DTF), direct DTF (dDTF), short-time DTF (SDTF), bivariate coherence, and partial directed coherence, is compared by means of simulations and on examples of experimental signals. In particular, the differences between pair-wise and multichannel estimates are studied. The results show unequivocally that in most cases the pair-wise estimates are incorrect, and that the complete set of signals involved in a given process has to be used to obtain the correct pattern of EEG flows. The differing performance of multivariate estimators of propagation depending on their normalization is discussed, and the advantages of the multivariate autoregressive model are pointed out.

17.
After analyzing the multivariate Cpm method (Chan et al., 1991), this paper presents a spatial multivariate process capability index (PCI) method, which can handle the multivariate off-centered case and may serve as a reference for assuring and improving process quality while providing an overall evaluation of process quality. Examples of calculating the multivariate PCI are given, and the experimental results show that the presented method is systematic, effective, and practical.
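For reference, a sketch of the univariate Cpm that multivariate PCIs of this kind generalize (in the multivariate case the denominator becomes a quadratic form in the covariance matrix and the target vector); the data and specification limits below are invented:

```python
import numpy as np

def cpm(x, lsl, usl, target):
    """Univariate Cpm: (USL - LSL) / (6 * sqrt(var + (mean - target)^2))."""
    x = np.asarray(x, dtype=float)
    tau = np.sqrt(x.var() + (x.mean() - target) ** 2)  # spread about the target
    return (usl - lsl) / (6.0 * tau)

rng = np.random.default_rng(4)
index = cpm(rng.normal(10.2, 0.5, 500), lsl=8.0, usl=12.0, target=10.0)
```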

18.
This work combines optimization and ergodic theory to characterize the optimum long-run average performance that can be asymptotically attained by nonanticipating sequential decisions. Let {Xt} be a stationary ergodic process, and suppose an action bt must be selected from a space ℬ with knowledge of the t-past (X0, ···, Xt−1) at the beginning of every period t ⩾ 0. Action bt incurs a loss l(bt, Xt) at the end of period t, when the random variable Xt is revealed. The author proves, under mild integrability conditions, that the optimum strategy is to select at each step the action that minimizes the conditional expected loss given the currently available information. The minimum long-run average loss per decision can be approached arbitrarily closely by strategies that are finite-order Markov and, under certain continuity conditions, equals the minimum expected loss given the infinite past. If the loss l(b, x) is bounded and continuous and the space ℬ is compact, then the minimum can be asymptotically attained, even if the distribution of the process {Xt} is unknown a priori and must be learned from experience.
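A toy version of the prescribed strategy, assuming a finite-order Markov approximation with empirical conditional losses; the process, loss, and context length are stand-ins:

```python
import numpy as np
from collections import defaultdict

def run(xs, actions, loss, k=2):
    """Pick the action minimizing the empirical conditional expected loss
    given the last k symbols; update statistics only after acting."""
    counts = defaultdict(lambda: defaultdict(int))   # context -> symbol counts
    total = 0.0
    for t in range(k, len(xs)):
        ctx = tuple(xs[t - k:t])
        hist = counts[ctx]
        if hist:
            b = min(actions,
                    key=lambda a: sum(c * loss(a, x) for x, c in hist.items()))
        else:
            b = actions[0]                           # no data for this context yet
        total += loss(b, xs[t])                      # loss revealed after acting
        hist[xs[t]] += 1                             # nonanticipating update
    return total / (len(xs) - k)

rng = np.random.default_rng(3)
xs = rng.integers(0, 2, 10_000).tolist()             # stand-in for {X_t}
avg = run(xs, actions=[0, 1], loss=lambda b, x: float(b != x))
```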

19.
By removing infant mortalities, burn-in of semiconductor devices improves reliability. However, burn-in may affect the yield of semiconductor devices, since defects grow during burn-in and some of them end up as yield loss; the amount of yield loss depends on the burn-in environment. Another burn-in effect is yield gain: since yield is a function of defect density, if some defects are detected and removed during burn-in, the yield of the post-burn-in process can be expected to increase. The amount of yield gain depends on the number of defects removed during burn-in. In this paper we present yield loss and gain expressions and relate them to the reliability projection of semiconductor devices in order to determine burn-in time.
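The yield-gain/yield-loss trade-off can be illustrated with the classical Poisson yield model Y = exp(−A·D); the defect-density numbers below are invented and are not the paper's expressions:

```python
import numpy as np

def poisson_yield(defect_density, area=1.0):
    # Yield as a function of defect density D and die area A: exp(-A * D).
    return np.exp(-area * defect_density)

d0 = 0.5            # defects/cm^2 entering burn-in (illustrative)
removed = 0.10      # defects detected and removed during burn-in -> yield gain
grown = 0.03        # latent defects that grow during burn-in -> yield loss

yield_gain = poisson_yield(d0 - removed) - poisson_yield(d0)
yield_loss = poisson_yield(d0) - poisson_yield(d0 + grown)
```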

20.
A spatial Green's function in 2-D for straight, infinite microstriplines has been shown to be accurate at frequencies where dynamic effects cannot be neglected. It is reasonable, therefore, to expect that a similarly accurate spatial Green's function in 3-D can be constructed for finite and curved microstriplines. Based on the same image model of charges and currents as in 2-D, this paper constructs the 3-D Green's function. The Green's function is then applied, through Harrington's moment method, to calculate the input impedance of a few microstriplines, viz., a matched microstripline and straight and hairpin open-ended stubs. The input impedance of a microstrip stub always has a small resistive component indicating radiative loss. This resistive component agrees with that calculated from Lewin's formula. Finally, as expected, the imaginary part of the input impedance agrees with that calculated from the TEM approach.

