1.
An algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) via the EM (expectation-maximization) algorithm, and it is derived for both postmortem and non-postmortem time-to-failure data. The MLEs of the non-postmortem data are obtained for mixed-Weibull distributions with up to 14 parameters in a five-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should be started from several initial guesses of the parameter set. It is shown that the EM algorithm is very efficient: on average, for two-Weibull mixtures with a sample size of 200, the CPU time (on a VAX 8650) is 0.13 s/iteration. The number of iterations depends on the characteristics of the mixture and is small if the subpopulations in the mixture are well separated. Generally, the algorithm is not sensitive to the initial guesses of the parameters.
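As a rough illustration of the approach, the E- and M-steps for a two-component Weibull mixture can be sketched as follows. This handles complete (uncensored) samples only; the paper's censored-data likelihood is omitted, and the starting guesses, iteration counts, and clipping bounds are illustrative, not the paper's.

```python
import numpy as np

def weibull_pdf(x, shape, scale):
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * np.exp(-z ** shape)

def em_two_weibull(x, n_iter=100):
    # Illustrative starting guesses; the abstract notes the likelihood can
    # be multimodal, so several restarts may be needed in practice.
    w = np.array([0.5, 0.5])
    shapes = np.array([1.0, 3.0])
    scales = np.array([0.5 * np.median(x), 1.5 * np.median(x)])
    logx = np.log(x)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.stack([w[k] * weibull_pdf(x, shapes[k], scales[k])
                         for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: mixing weights in closed form; shape by a weighted
        # fixed-point iteration on the Weibull MLE equation, then scale
        w = r.mean(axis=1)
        for k in range(2):
            rk = r[k]
            for _ in range(10):
                xb = x ** shapes[k]
                A = (rk * xb * logx).sum() / (rk * xb).sum()
                B = (rk * logx).sum() / rk.sum()
                shapes[k] = np.clip(1.0 / (A - B), 0.1, 50.0)
            scales[k] = ((rk * x ** shapes[k]).sum()
                         / rk.sum()) ** (1.0 / shapes[k])
    return w, shapes, scales
```

With well-separated subpopulations the iteration settles quickly, consistent with the abstract's observation that the iteration count depends on how distinct the mixture components are.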
2.
《IEEE transactions on information theory / Professional Technical Group on Information Theory》1963,9(3):182-191
In studies of sequential detection of radar signals, the parameter of primary interest is the length of the sequential test, denoted by n. Since this test length is a random variable, moments and/or probability distribution functions of n are desirable. A procedure is described in this communication for obtaining exact probability distribution functions P(n) and exact average values of n, E(n), when the input to the sequential processor is discrete radar data (radar data in quantized form). This procedure is based upon the representation of the sequential test as a Markov process. The results are quite general in that they apply to multilevel quantization of the data. However, the procedure appears especially attractive when the number of levels is small, as is usually the case when dealing with discrete radar data. The procedure for determining exact distribution functions and average values of n presented herein is compared with the Wald-Girshick approach for obtaining P(n) and E(n), and the superiority of the former approach in computational convenience is indicated.
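The Markov-process idea can be illustrated with a toy binary quantizer: increments of ±1 with probability p, absorbing thresholds at +a and -b, and P(n) and E(n) read off by propagating the state distribution through the transition matrix. The thresholds and quantizer here are assumptions for illustration, not the paper's radar setting.

```python
import numpy as np

def stopping_time_distribution(p, a, b, n_max):
    # States are cumulative sums -b..a; -b and +a are absorbing barriers.
    states = np.arange(-b, a + 1)
    idx = {s: i for i, s in enumerate(states)}
    T = np.zeros((len(states), len(states)))
    for s in states:
        i = idx[s]
        if s in (-b, a):
            T[i, i] = 1.0              # absorbing threshold
        else:
            T[i, idx[s + 1]] = p       # quantized sample favors "signal"
            T[i, idx[s - 1]] = 1 - p   # quantized sample favors "noise"
    dist = np.zeros(len(states))
    dist[idx[0]] = 1.0                 # test starts at zero
    P_n = []                           # P(test terminates exactly at step n)
    absorbed = 0.0
    for n in range(1, n_max + 1):
        dist = dist @ T
        now = dist[idx[-b]] + dist[idx[a]]
        P_n.append(now - absorbed)
        absorbed = now
    E_n = sum(n * pn for n, pn in enumerate(P_n, start=1))
    return np.array(P_n), E_n
```

For the symmetric case p = 0.5 with barriers at ±2, the exact mean test length is a·b = 4, which the truncated sum reproduces to machine precision.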
3.
4.
Bayly P.V. KenKnight B.H. Rogers J.M. Hillsley R.E. Ideker R.E. Smith W.M. 《IEEE transactions on bio-medical engineering》1998,45(5):563-571
An automated method to estimate vector fields of propagation velocity from observed epicardial extracellular potentials is introduced. The method relies on fitting polynomial surfaces T(x,y) to the space-time (x,y,t) coordinates of activity. Both speed and direction of propagation are computed from the gradient of the local polynomial surface. The components of velocity, which are total derivatives, are expressed in terms of the partial derivatives which comprise the gradient of T. The method was validated on two-dimensional (2-D) simulations of propagation and then applied to cardiac mapping data. Conduction velocity was estimated at multiple epicardial locations during sinus rhythm, pacing, and ventricular fibrillation (VF) in pigs. Data were obtained via a 528-channel mapping system from 23×22 and 24×21 arrays of unipolar electrodes sutured to the right ventricular epicardium. Velocity estimates are displayed as vector fields and are used to characterize propagation qualitatively and quantitatively during both simple and complex rhythms.
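A minimal sketch of the core computation, assuming a quadratic surface T(x,y) fitted by least squares to activation times; the velocity vector then follows from v = ∇T/|∇T|², so speed is 1/|∇T|. The polynomial degree and evaluation point are illustrative.

```python
import numpy as np

def conduction_velocity(x, y, t, x0, y0):
    """Fit a quadratic surface T(x,y) to activation times (x, y, t) and
    return the propagation-velocity vector at (x0, y0): v = grad(T)/|grad(T)|^2."""
    # Design matrix for T = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
    c, *_ = np.linalg.lstsq(A, t, rcond=None)
    Tx = c[1] + 2 * c[3] * x0 + c[4] * y0   # dT/dx at (x0, y0)
    Ty = c[2] + c[4] * x0 + 2 * c[5] * y0   # dT/dy at (x0, y0)
    g2 = Tx ** 2 + Ty ** 2
    return np.array([Tx / g2, Ty / g2])     # speed = 1/|grad T|
```

A plane wave travelling along x at speed 2 has activation times t = x/2, and the recovered vector is (2, 0), confirming the gradient-to-velocity conversion.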
5.
《IEEE transactions on information theory / Professional Technical Group on Information Theory》1987,33(3):367-372
Let (X,Y), (X_1,Y_1), (X_2,Y_2), ... be independent identically distributed pairs of random variables, and let m(x) = E(Y|X = x) be the regression curve of Y on X. The estimation of zeros and extrema of the regression curve via stochastic approximation methods is considered. Consistency results for some sequential procedures are presented, and termination rules are defined providing fixed-width confidence intervals for the parameters to be estimated.
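A Kiefer-Wolfowitz-type recursion of the kind the abstract alludes to, here locating a maximum of m(x) from noisy evaluations. The step-size and difference-width sequences are common textbook choices, not necessarily those of the paper.

```python
def kiefer_wolfowitz(sample_y, x0, n_steps=2000):
    """Locate a maximum of m(x) = E(Y | X = x) from noisy evaluations.
    sample_y(x) returns one noisy observation of m at x."""
    x = x0
    for n in range(1, n_steps + 1):
        a_n = 1.0 / n            # step-size sequence, sum a_n = inf
        c_n = n ** (-1.0 / 3.0)  # finite-difference width, c_n -> 0
        # noisy central-difference estimate of m'(x)
        grad = (sample_y(x + c_n) - sample_y(x - c_n)) / (2 * c_n)
        x = x + a_n * grad       # ascend toward the extremum
    return x
```

For a zero of the regression curve (rather than an extremum) the analogous Robbins-Monro recursion drops the finite difference and steps directly on the observed value.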
6.
Brown R.H. Schneider S.C. Mulligan M.G. 《Industrial Electronics, IEEE Transactions on》1992,39(1):11-19
Algorithms for constructing velocity approximations from discrete position-versus-time data are investigated. The study is limited to algorithms suitable for providing velocity information in discrete-time feedback control systems, such as microprocessor-based systems with a discrete position encoder. Velocity estimators based on lines per period, reciprocal time, Taylor series expansions, backward-difference expansions, and least-squares curve fits are presented. Based on computer simulations, the relative accuracies of the different algorithms are compared. The least-squares velocity estimators filtered the effect of imperfect measurements best, whereas the Taylor series expansion and backward-difference estimators responded better to velocity transients.
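Two of the estimator families can be sketched directly. The backward-difference forms are exact for linear motion, while the least-squares slope trades transient response for noise filtering, mirroring the trade-off the study reports; window length and sample period here are illustrative.

```python
import numpy as np

def backward_difference(pos, dt, order=1):
    """Backward-difference velocity estimate at the latest encoder sample."""
    if order == 1:
        return (pos[-1] - pos[-2]) / dt
    # second-order backward difference: (3*p[n] - 4*p[n-1] + p[n-2]) / (2*dt)
    return (3 * pos[-1] - 4 * pos[-2] + pos[-3]) / (2 * dt)

def least_squares_velocity(pos, dt):
    """Slope of a least-squares line through the sample window; filters
    quantization noise better at the cost of transient response."""
    t = np.arange(len(pos)) * dt
    return np.polyfit(t, pos, 1)[0]
```

On exactly linear position data all three estimates agree with the true velocity; the differences appear only once measurement imperfections or transients are introduced.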
7.
Laser altimetry measurements from aircraft and spacecraft
Bufton J.L. 《Proceedings of the IEEE. Institute of Electrical and Electronics Engineers》1989,77(3):463-477
8.
Galantowicz J.F. Entekhabi D. Njoku E.G. 《Geoscience and Remote Sensing, IEEE Transactions on》1999,37(4):1860-1870
Sequential data assimilation (Kalman filter optimal estimation) techniques are applied to the problem of retrieving near-surface soil moisture and temperature state from periodic terrestrial radiobrightness observations that update soil heat and moisture diffusion models. The retrieval procedure uses a time-explicit numerical model to continuously propagate the soil state profile, its error of estimation, and its interdepth covariances through time. The model's coupled soil moisture and heat fluxes are constrained by micrometeorology boundary conditions drawn from observations or atmospheric modeling. When radiometer data are available, the Kalman filter updates the state profile estimate by weighing the propagated state, error, and covariance estimates against an a priori estimate of radiometric measurement error. The Kalman filter compares predicted and observed radiobrightnesses directly, so no inverse algorithm relating brightness to physical parameters is required. The authors demonstrate Kalman filter model effectiveness using field observations and a simulation study. An observed 1 m soil state profile is recovered over an eight-day period from daily L-band observations following an intentionally poor initial state estimate. In a four-month simulation study, they gauge the longer term behavior of the soil state retrieval and Kalman gain through multiple rain events, soil dry-downs, and updates from radiobrightnesses.
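The propagate/update cycle reduces, in the scalar linear case, to a few lines. Here `model_step` stands in for the coupled heat and moisture model, and `None` entries mark steps with no radiometer overpass; all names and numbers are assumptions for illustration, not the paper's model.

```python
def kalman_assimilate(x0, P0, model_step, Q, obs, R):
    """Scalar Kalman filter: propagate state x and error variance P with a
    model, and update whenever an observation z arrives (None = no data)."""
    x, P = x0, P0
    for z in obs:
        # propagation: model forecast plus growth of estimation error
        x = model_step(x)
        P = P + Q
        if z is not None:
            # update: weigh the forecast against measurement-error variance R
            K = P / (P + R)          # Kalman gain (scalar case)
            x = x + K * (z - x)      # predicted vs observed, compared directly
            P = (1 - K) * P
    return x, P
```

As in the abstract's field experiment, an intentionally poor initial estimate is pulled toward the truth within a few updates because the large initial variance makes the gain close to one.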
9.
10.
We derive variance and bias expressions for the direction-of-arrival (DOA) estimates of MIN-NORM and FINE. These expressions, for arbitrarily configured arrays, are valid and accurate over wide ranges of SNR and numbers of snapshots. Using these expressions, we show that both MIN-NORM and FINE have smaller estimate bias than MUSIC, and that FINE additionally has estimate variance comparable to MUSIC's.
11.
Solanki K. Jacobsen N. Madhow U. Manjunath B.S. Chandrasekaran S. 《IEEE transactions on image processing》2004,13(12):1627-1639
12.
D. R. Halverson 《Circuits, Systems, and Signal Processing》1995,14(4):465-472
In this paper the dual topics of robust signal detection and robust estimation of a random variable are considered, where the data may be both dependent and nonstationary. We note that classical saddlepoint techniques for robustness do not readily apply in the dependent and/or nonstationary situation, and thus our results have application in a larger domain than what was feasible heretofore. In addition, our methods make possible the quantitative measurement of robustness and admit essentially arbitrary perturbations in an underlying joint statistical distribution away from the nominal. In particular, our methods show that the presence of dependency can result in a reduction of the robustness of the linear detector by approximately 50% and that appropriate censoring can improve this situation. We also show that, somewhat surprisingly, a weak amount of censoring can actually reduce robustness rather than increase it, even with dependent data that is almost independent. This calls into question the common practice, inspired by classical saddlepoint results for independent data, of employing censoring in cases where residual dependency is conceded. When applied to estimation, our work shows that for nominally Gaussian data, the conditional expectation estimator is optimal not only in terms of performance but also robustness (under appropriate performance measures), thus reinforcing the appeal of this estimator. On the other hand, for other performance measures, we also note that the conditional expectation estimator can be completely unrobust, regardless of whether the data is nominally Gaussian or not. Finally, our results establish a bound on estimator robustness. This research was supported by the Air Force Office of Scientific Research under Grant AFOSR-91-0267.
13.
This paper describes a method for combining multiple, dense range images to create surface reconstructions of height functions. Height functions are a special class of three-dimensional (3-D) surfaces, where one 3-D coordinate is a function of the other two. They are relevant for application domains such as terrain modeling or two-and-a-half-dimensional surface reconstruction. Dense range maps are produced by either a range measuring device combined with a scanning mechanism or a triangulation scheme, such as active or passive stereo. The proposed method follows from a statistical formulation that characterizes the optimal surface estimate as the one that maximizes the posterior probability conditional on the input data and prior information about the application domain. Because the domain of the reconstruction is a two-dimensional (2-D) scalar function, the optimal surface can be expressed as an image, and the variational form of that optimization produces a 2-D partial differential equation (PDE). The PDE consists of two parts: a first-order data term and a second-order smoothing term. Thus optimal surface reconstruction is formulated as the solution to a second-order, nonlinear PDE on an image, which is related to the family of PDE-based image processing algorithms in the literature. This paper presents the theory for reconstruction and some particular aspects of the numerical implementation. It also analyzes results on both synthetic and real data sets, which show a 75%-95% reduction of the RMS sensor error.
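A stripped-down version of the variational iteration, assuming a single noisy height map, a quadratic data term, and a membrane smoother whose gradient-descent flow is the 5-point Laplacian. The paper's full method fuses multiple range maps with a more elaborate smoothing term; the parameters below are assumptions.

```python
import numpy as np

def reconstruct_height(d, lam=1.0, dt=0.15, n_iter=300):
    """Gradient descent on E(u) = lam*(u - d)^2 + |grad u|^2: a data term
    pulling the surface toward the measurements plus a smoothing term."""
    u = d.copy()
    for _ in range(n_iter):
        # 5-point discrete Laplacian with replicated borders
        up = np.pad(u, 1, mode='edge')
        lap = (up[:-2, 1:-1] + up[2:, 1:-1] +
               up[1:-1, :-2] + up[1:-1, 2:] - 4 * u)
        u = u + dt * (lam * (d - u) + lap)
    return u
```

On a flat synthetic patch corrupted by sensor noise, this toy smoother already removes well over half of the RMS error, in the same spirit as the 75%-95% reduction the paper reports for its full method.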
14.
15.
We compare certain reduced data records for scattering of 5-mm electromagnetic waves by two sets of Styrofoam spheres of different diameter, moving randomly within a slab-region Styrofoam container. For the small spheres we summarize raw measurements for the forward scattered coherent phase, for the average intensities, and for the variances and covariance of phase-quadrature components of the instantaneous field; analogous results for the large spheres were given previously [1]. The emphasis is on the relative behavior of the two sets of reduced data records obtained by using scattering theory to eliminate sphere size, etc., from the original data. In particular, we compare appropriate, reduced data with statistical mechanics approximations for the "hard sphere gas" in order to delineate differences, and to indicate that the analytical procedures developed for the "dynamical gas model" may be applied to isolate the analogous functions for naturally occurring distributions.
16.
Improved variations relating the Ziv-Lempel and Welch-type algorithms for sequential data compression
Yokoo H. 《IEEE transactions on information theory / Professional Technical Group on Information Theory》1992,38(1):73-81
Several data compression algorithms relating existing important source coding algorithms, including Ziv-Lempel codes, Rissanen's Context, and Welch's LZW method, are presented. First, an intermediate algorithm between the two Ziv-Lempel methods for universal data compression is proposed, which has the same asymptotic optimality as the well-known method based on incremental parsing. The proposed algorithm is then compared with the context-gathering algorithm Context in terms of gathering direction and gathering frequency. It is shown that while the proposed algorithm and Context have the same gathering frequency, they have opposite directions of context gathering. Practical variations are also considered. By combining the proposed algorithm with Welch's device, two practical data compression methods are obtained. They, as well as Welch's LZW method, start with a small table of symbol strings and build the table during compression and decompression. In practical methods, higher compression efficiency can be gained by accelerating the growth of the table.
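Welch's device, growing a string table from single symbols during both compression and decompression, is the part easiest to show in isolation. This is a textbook LZW sketch, not the paper's combined algorithm.

```python
def lzw_compress(data: str):
    """Start from a small single-symbol table and grow it while coding,
    as in Welch's LZW; returns integer codes plus the initial alphabet."""
    table = {ch: i for i, ch in enumerate(sorted(set(data)))}
    w, out = "", []
    for ch in data:
        if w + ch in table:
            w += ch                      # extend the current match
        else:
            out.append(table[w])
            table[w + ch] = len(table)   # grow the string table
            w = ch
    if w:
        out.append(table[w])
    return out, sorted(set(data))

def lzw_decompress(codes, alphabet):
    """Rebuild the same table on the fly; no table is transmitted."""
    table = {i: ch for i, ch in enumerate(alphabet)}
    w = table[codes[0]]
    out = [w]
    for c in codes[1:]:
        # c may reference the entry being built (the KwKwK special case)
        entry = table[c] if c in table else w + w[0]
        out.append(entry)
        table[len(table)] = w + entry[0]
        w = entry
    return "".join(out)
```

The decoder reconstructs the table one step behind the encoder, which is exactly why accelerating the table's growth, as the paper proposes, translates directly into better compression without extra side information.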
17.
The potential of fractal interpolation functions (FIFs) for data compression is realized by the construction of a set of multirate filters. The filter tap weights are determined by optimizing the energy contents of a preselected set of frequency bands. This filter bank implementation of the FIF is successfully used to compress data simulated in a tracking environment.
18.
Adaptive algorithms with nonlinear data and error functions
The tools of nonlinear system theory are used to examine several common nonlinear variants of the LMS algorithm and derive a persistence of excitation criterion for local exponential stability. The condition is tight when the inputs are periodic, and a generic counterexample is demonstrated which gives (local) instability for a large class of such nonlinear versions of LMS, specifically, those which utilize a nonlinear data function. The presence of a nonlinear error function is found to be relatively benign in that it does not affect the stability of the error system. Rather, it defines the cost function the algorithm tends to minimize. Specific examples include the dead zone modification, the cubed data nonlinearity, the cubed error nonlinearity, the signed regressor algorithm, and a single-layer version of the backpropagation algorithm.
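One of the specific examples, the signed-regressor algorithm, applies the nonlinearity to the data rather than the error, which is the class the abstract flags as potentially unstable for some inputs. A noiseless system-identification sketch follows; the filter taps, step size, and white-Gaussian signal model (a benign input for this variant) are assumptions.

```python
import numpy as np

def signed_regressor_lms(x, d, n_taps, mu):
    """LMS variant with a nonlinear data function: the regressor in the
    update is replaced by its sign, cutting multiplies at the cost of
    stability that now depends on the input statistics."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # most recent samples first
        e = d[n] - w @ u                   # ordinary (linear) error
        w = w + mu * e * np.sign(u)        # sign() on the data, not the error
    return w
```

With white Gaussian input the average update matrix stays positive definite and the taps converge to the true filter; the abstract's counterexamples arise for other input classes.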
19.
The accuracy of a migration image of ground-penetrating radar (GPR) depends strongly on the accuracy of the permittivity distribution determined from multioffset data. This paper proposes a migration velocity analysis method using a genetic algorithm (GA). The objective function is defined as the summation of the normalized zero-delay cross correlation of all common-image-point gathers. Under the assumptions that the media are blockwise homogeneous and that the permittivity of each block can be expressed as a polynomial with limited terms, all coefficients of the permittivity function of each block, which maximize the objective function, are determined by the migration velocity analysis method with the GA. Prestack migration is performed by a reverse-time migration method based on Maxwell's equations solved by the finite-difference time-domain method with perfectly matched layer absorbing boundary conditions. The migration velocity analysis method is applied to synthetic common-transmitter datasets to test the method. Then, the velocity analysis and prestack migration method are applied to field data. From the distribution of dielectric constant obtained from the field data, water content is derived, and the depth of a water aquifer is deduced from the water content distribution and a migration stack profile.
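The GA component can be sketched generically. Here a small real-coded GA maximizes a stand-in objective in place of the summed cross correlation over common-image-point gathers; the population size, operators, and test objective are all assumptions, not the paper's configuration.

```python
import numpy as np

def ga_maximize(objective, bounds, pop=40, gens=60, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    blend crossover, Gaussian mutation, and elitism."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([objective(p) for p in P])
        # tournament selection: each child slot picks the fitter of two
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where((f[i] > f[j])[:, None], P[i], P[j])
        # blend crossover with a random partner, then Gaussian mutation
        mates = parents[rng.permutation(pop)]
        a = rng.uniform(0, 1, (pop, 1))
        C = a * parents + (1 - a) * mates
        C += rng.normal(0, 0.05 * (hi - lo), C.shape)
        C = np.clip(C, lo, hi)
        # elitism: carry the current best individual unchanged
        C[0] = P[np.argmax(f)]
        P = C
    f = np.array([objective(p) for p in P])
    return P[np.argmax(f)]
```

In the paper's setting the chromosome would hold the polynomial coefficients of each block's permittivity, and each fitness evaluation would require a prestack migration, which is why the population and generation counts matter for cost.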
20.
The application of the Java platform to real-time transmission of multimedia data streams is introduced. The key protocols for real-time streaming in networked multimedia, namely RTP, RSVP, RTSP, and IPv6, are analyzed with respect to their significance for real-time transmission.