Similar articles (20 results)
1.
2.
3.
In this paper, a new approach is presented for evaluating the probability density function (pdf) of a random variable from knowledge of its lower moments. The classical moment problem (MP) is first revisited; it gives the conditions under which an assigned sequence of sample moments really is a sequence of moments of some distribution. An alternative approach, which the authors term the kernel density maximum entropy (MaxEnt) method, is then presented: it approximates the target pdf by a convex linear combination of kernel densities, transforming the original MP into a discrete MP that is solved through a MaxEnt approach. In this way an approximating pdf converging toward the MaxEnt pdf is obtained simply by solving a discrete MaxEnt problem, without evaluating numerical integrals. The method is first demonstrated by approximating some known analytical pdfs (the chi-square and the Gumbel pdfs) and is then applied to several experimental engineering problems, namely modelling the pdf of concrete strength, of the circular frequency and damping ratio of strong ground motions, and of the extreme wind speed in the Messina Strait region. All the numerical applications show the accuracy and efficacy of the proposed procedure. Copyright © 2008 John Wiley & Sons, Ltd.
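A minimal sketch of the kernel-density moment-matching idea described above (not the authors' code): the target pdf is written as a convex combination of Gaussian kernels, and the kernel weights are found from a discrete MaxEnt problem constrained by the first few raw moments. The kernel grid, bandwidth and use of SciPy's SLSQP solver are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, chi2

# Target: chi-square pdf with 4 degrees of freedom, known here only through
# its first four raw moments.
dof = 4
target_moments = np.array([chi2.moment(k, dof) for k in range(1, 5)])

# Kernels: Gaussian densities centred on a grid covering the support (assumed).
centres = np.linspace(0.5, 20.0, 40)
h = 1.0  # kernel bandwidth (assumed)

# M[k-1, j] = k-th raw moment of the j-th kernel N(centre_j, h^2).
M = np.array([[norm.moment(k, loc=c, scale=h) for c in centres]
              for k in range(1, 5)])

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))   # minimizing this maximizes the Shannon entropy

constraints = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
               {"type": "eq", "fun": lambda p: M @ p - target_moments}]
p0 = np.full(centres.size, 1.0 / centres.size)
res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * centres.size,
               constraints=constraints, method="SLSQP")

# Reconstructed pdf as the weighted kernel superposition.
x = np.linspace(0.0, 25.0, 500)
pdf_hat = norm.pdf(x[:, None], centres, h) @ res.x
print("max abs deviation from the exact chi-square pdf:",
      np.abs(pdf_hat - chi2.pdf(x, dof)).max())
```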

4.
In this paper, we focus on the performance of adjustment rules for a machine that produces items in batches and that can experience errors at each setup operation performed before machining a batch. The adjustment rule is applied to compensate for the setup offset in order to bring the process back to target. In particular, we deal with the case in which no prior information about the distribution of the offset, or about the within-batch variability, is available. Under such conditions, the adjustment rules that can be applied are Grubbs' rules, the exponentially weighted moving average (EWMA) controller, and the Markov chain Monte Carlo (MCMC) adjustment rule, which is based on a Bayesian sequential estimation of the unknown parameters using MCMC simulation. The performance metric for the different adjustment rules is the sum of the quadratic off-target costs over the set of batches machined. Given the number of batches and the batch size, different production scenarios (characterized by different values of the lot-to-lot and within-lot variability and of the mean offset over the set of batches) are considered. The MCMC adjustment rule is shown to have better performance in almost all of the cases examined. Furthermore, a closer study of the cases in which the MCMC policy is not the best adjustment rule motivates a modified version of this rule, which outperforms the alternative adjustment policies in all the scenarios considered. Copyright © 2005 John Wiley & Sons, Ltd.
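As an illustration of two of the simpler rules compared above, the sketch below simulates a single batch with an unknown setup offset and applies either Grubbs' harmonic rule or an EWMA (integral) controller, scoring each by the quadratic off-target cost. The batch size, noise level and offset are assumed values, not the paper's scenarios.

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, sigma_within, setup_offset = 50, 1.0, 3.0   # assumed scenario

def run_batch(rule, lam=0.2):
    adjustment = 0.0          # cumulative correction applied to the machine
    cost = 0.0
    for i in range(1, batch_size + 1):
        deviation = setup_offset + adjustment + sigma_within * rng.standard_normal()
        cost += deviation ** 2                    # quadratic off-target cost
        if rule == "grubbs":                      # harmonic rule: shrinking steps
            adjustment -= deviation / i
        elif rule == "ewma":                      # integral (EWMA) controller
            adjustment -= lam * deviation
    return cost

print("Grubbs harmonic rule cost:", run_batch("grubbs"))
print("EWMA controller cost     :", run_batch("ewma"))
```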

5.
The wireless sensing signal of a passive surface acoustic wave (SAW) resonator sensor is the response of the SAW resonator, in a passive circuit, to wireless radio-frequency interrogation. A response is produced only when the interrogation covers the operational frequency band of the resonator; it is transient and can be detected only in close proximity after the interrogation is switched off. Because the resonant frequency of the resonator varies with the measurand when the device is used as a sensor, the interrogation of a passive SAW resonator sensor has to track the corresponding variation of the device's frequency band. The energy of the response is evaluated to detect whether a sensing response is available, and it is used as a feedback quantity to roughly localize the operational frequency range of the sensor. A modified frequency estimation is employed to estimate the sensing characteristic frequency in the transient wireless sensing signal, which has a low signal-to-noise ratio. The estimate is then used to adjust the interrogation frequency further, so that it follows the frequency variation of the sensor until the response becomes optimal. The evaluation of the signal energy, together with the statistical quantity of the frequency estimate, gives a reference for the confidence of the estimated frequency.
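A rough sketch of the signal-processing chain implied above (not the paper's modified estimator): the energy of the captured transient is used as a crude presence check, and the characteristic frequency is then estimated from the FFT peak with parabolic interpolation. Sample rate, resonant frequency, decay constant and the energy threshold are all assumed for illustration.

```python
import numpy as np

fs, f0, tau = 10e6, 434e3, 5e-4          # sample rate, resonant freq, decay (assumed)
t = np.arange(0, 1e-3, 1 / fs)
rng = np.random.default_rng(1)
signal = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)

energy = np.mean(signal ** 2)
if energy > 0.15:                        # presence threshold (assumed)
    spec = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
    k = int(np.argmax(spec))
    # Parabolic interpolation around the peak bin for sub-bin resolution.
    a, b, c = spec[k - 1], spec[k], spec[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    f_est = (k + delta) * fs / t.size
    print(f"energy {energy:.3f}; estimated characteristic frequency {f_est / 1e3:.2f} kHz")
else:
    print("no detectable response; shift or widen the interrogation band")
```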

6.
Borah DK, Voelz DG. Applied Optics, 2007, 46(23): 6010-6018.
The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.

7.
The problem of improving the quality of measurand reconstruction using integration techniques for the implementation of various measurement functions is addressed. An integrated specialized structure for Kalman-filter-based reconstruction of the measurand is proposed. It is designed to act as an external coprocessor for a host processor. Although intended to improve the resolution of spectrometric measurements, it may be used in other applications where similar processing of measurement signals is required. The performance of the integrated specialized structure is compared with that of the general-purpose digital signal processor DSP56001.
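For readers unfamiliar with the underlying algorithm, the sketch below shows a minimal scalar Kalman filter of the kind such a coprocessor would implement, here tracking a slowly drifting measurand from noisy readings; the random-walk model and noise variances are assumed, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q, r = 200, 1e-4, 0.04          # steps, process noise var, measurement noise var
true_x = np.cumsum(rng.normal(0, np.sqrt(q), n)) + 1.0
z = true_x + rng.normal(0, np.sqrt(r), n)

x_hat, p = 0.0, 1.0                # initial state estimate and covariance
estimates = []
for zk in z:
    # Predict (random-walk model: the state carries over, uncertainty grows).
    p = p + q
    # Update with the new measurement.
    k_gain = p / (p + r)
    x_hat = x_hat + k_gain * (zk - x_hat)
    p = (1.0 - k_gain) * p
    estimates.append(x_hat)

rmse_raw = np.sqrt(np.mean((z - true_x) ** 2))
rmse_kf = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
print(f"raw RMSE {rmse_raw:.3f}  vs  Kalman RMSE {rmse_kf:.3f}")
```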

8.
This paper presents an innovative application of a new class of parallel interacting Markov chain Monte Carlo methods to solve the Bayesian history matching (BHM) problem. BHM consists of sampling a posterior distribution given by Bayes' theorem. Markov chain Monte Carlo (MCMC) is well suited, in principle, for sampling any type of distribution; however, the number of iterations required by traditional single-chain MCMC can be prohibitive in BHM applications. Furthermore, history matching is typically a highly nonlinear inverse problem, which leads to very complex posterior distributions characterized by many separated modes, so a single chain can become trapped in a local mode. Parallel interacting chains are an interesting way to overcome this problem, as shown in this paper. In addition, we present new approaches to define starting points for the parallel chains. For validation purposes, the proposed methodology is first applied to a simple but challenging cross-section reservoir model with many modes in the posterior distribution. Afterwards, an application to a realistic case integrated with geostatistical modelling is also presented. The results show that the combination of parallel interacting chains with the distributed computing capabilities commonly available nowadays is very promising for solving the BHM problem.
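The sketch below illustrates the general idea of parallel interacting chains on a toy bimodal target, in the spirit of parallel tempering (one concrete member of this class; the paper's own algorithm and starting-point strategies may differ): hotter chains explore freely and occasionally exchange states with the cold chain that samples the posterior of interest.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(x):
    # Two well-separated modes, mimicking a multimodal history-matching posterior.
    return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

temps = np.array([1.0, 3.0, 9.0])           # temperature ladder (assumed)
x = rng.normal(0, 1, temps.size)            # one state per chain
samples = []

for it in range(10000):
    # Within-chain Metropolis step at each temperature.
    for j, T in enumerate(temps):
        prop = x[j] + rng.normal(0, 1.0)
        if np.log(rng.uniform()) < (log_post(prop) - log_post(x[j])) / T:
            x[j] = prop
    # Interaction: propose swapping the states of a random adjacent pair.
    j = rng.integers(temps.size - 1)
    log_alpha = (log_post(x[j + 1]) - log_post(x[j])) * (1 / temps[j] - 1 / temps[j + 1])
    if np.log(rng.uniform()) < log_alpha:
        x[j], x[j + 1] = x[j + 1], x[j]
    samples.append(x[0])                     # keep the cold chain only

samples = np.array(samples[2000:])
print("fraction of cold-chain samples in each mode:",
      np.mean(samples < 0), np.mean(samples > 0))
```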

9.
Fiber nonlinearities can degrade the performance of a wavelength-division multiplexing optical network. For high input power, a low chromatic-dispersion coefficient, or low channel spacing, the most severe penalties are due to four-wave mixing (FWM). To compute the bit-error rate due to FWM noise, one must accurately evaluate the probability density functions (pdfs) of both the space and the mark states. An accurate evaluation of the pdf of the FWM noise in the space state is given, for the first time to the authors' knowledge, by use of Monte Carlo simulations. Additionally, it is shown that the pdf in the mark state is not symmetric, as had been assumed in previous studies. Diagrams are presented that permit estimation of the pdf, given the number of channels in the system. The accuracy of the previous models is also investigated, and finally the results of this study are used to estimate the power limits of a wavelength-division multiplexing system.
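A toy Monte Carlo of the kind alluded to above (not the paper's system model): the received power in the mark and space states is simulated with many FWM-like interferers of random phase, from which the two pdfs, and the asymmetry of the mark-state pdf, can be estimated by histogramming. The term count and relative FWM amplitude are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)
n_interferers, n_trials = 24, 100_000     # assumed FWM term count and sample size
a = 0.05                                  # relative FWM field amplitude (assumed)

phases = rng.uniform(0.0, 2.0 * np.pi, (n_trials, n_interferers))
fwm_field = a * np.exp(1j * phases).sum(axis=1)

power_mark = np.abs(1.0 + fwm_field) ** 2    # signal field present ("1" bit)
power_space = np.abs(fwm_field) ** 2         # signal field absent ("0" bit)

for name, p in (("mark", power_mark), ("space", power_space)):
    pdf, edges = np.histogram(p, bins=200, density=True)   # empirical pdf
    skew = np.mean((p - p.mean()) ** 3) / p.std() ** 3
    print(f"{name:5s}: mean power {p.mean():.4f}, skewness {skew:+.3f}")
```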

10.

A Bayesian nonmetric successive categories multidimensional scaling (MDS) method is proposed. The proposed method can be seen as a Bayesian alternative to the maximum likelihood multidimensional successive scaling method proposed by Takane (1981), or as a nonmetric extension of Bayesian metric MDS by Oh and Raftery (2001). The model has a graded-response type measurement model part and a latent metric MDS part. All the parameters are jointly estimated using a Markov chain Monte Carlo (MCMC) estimation technique. Moreover, WinBUGS/OpenBUGS code for the proposed methodology is also given to aid applied researchers. The proposed method is illustrated through the analysis of empirical two-mode three-way similarity data.


11.
In this work, the problem of efficiently representing a compactly supported, continuous probability density function (pdf), and of exploiting that representation to determine the pdf approximately from a finite number of its moments, is addressed. The representation used is a finite superposition of kernel density functions. This representation preserves positivity and can approximate any continuous pdf as closely as required. The classical theory of the Hausdorff moment problem is reviewed in order to make clear how theoretical results such as the moment bounds can be exploited in the numerical procedure. Various difficulties arising from the well-known ill-posedness of the numerical moment problem are identified and resolved. The kernel coefficients of the pdf expansion are calculated by solving a constrained, non-negative least-squares problem. The consistency, numerical convergence and robustness of the solution algorithm are illustrated by numerical examples with unimodal and bimodal pdfs. Although this paper is restricted to univariate, compactly supported pdfs, the method can be extended to general pdfs, either univariate or multivariate, with finite or infinite support.
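A condensed sketch of the non-negative least-squares step described above (not the authors' implementation, and without their moment-bound safeguards): kernel weights on a fixed grid are fitted to the first few raw moments of a bimodal target on [0, 1]. The kernel grid, bandwidth and number of moments are assumptions.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

# Target: a bimodal pdf on [0, 1], assumed known here only through its moments.
def target_pdf(x):
    return 0.5 * norm.pdf(x, 0.3, 0.06) + 0.5 * norm.pdf(x, 0.7, 0.06)

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
n_moments = 10
mu = np.array([np.sum(x ** k * target_pdf(x)) * dx for k in range(n_moments + 1)])

# Kernels: narrow Gaussian densities on a grid over [0, 1] (assumed).
centres, h = np.linspace(0.05, 0.95, 30), 0.05
A = np.array([[np.sum(x ** k * norm.pdf(x, c, h)) * dx for c in centres]
              for k in range(n_moments + 1)])

w, resid = nnls(A, mu)          # non-negative least-squares fit to the moments
w /= w.sum()                    # renormalize so the mixture integrates to ~1

pdf_hat = norm.pdf(x[:, None], centres, h) @ w
print(f"moment residual {resid:.2e}, max pointwise pdf error "
      f"{np.abs(pdf_hat - target_pdf(x)).max():.3f}")
```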

12.
The functional statistical framework is considered to address the problem of least-squares estimation of the realizations of fractal and long-range dependent Gaussian random signals from observation of the corresponding response surface. The statistical methodology applied is based on the functional regression model. The geometrical properties of the separable Hilbert spaces of functions in which the response surface and the signal of interest lie are considered in order to remove the ill-posed nature of the estimation problem, which is due to the non-locality of the integro-pseudodifferential operators involved. Specifically, the local and asymptotic properties of the spectra of fractal and long-range dependent random fields in the Linnik-type, Dagum-type and auxiliary families are analysed to derive a stable solution to the associated functional estimation problem. Their pseudodifferential representation and Reproducing Kernel Hilbert Space (RKHS) characterization are also derived to describe the geometrical properties of the spaces in which the functional random variables involved in the corresponding regression problem can be found.

13.
Application of Bessel beam for Doppler velocity estimation
Limited-diffraction beams have a large depth of field and could be applied to medical imaging, tissue characterization, and nondestructive evaluation of materials. This paper reports the application of limited-diffraction beams, specifically the Bessel beam, to Doppler velocity estimation. The Bessel beam has the advantage that velocity estimation is less dependent on the depth of moving objects, and its Doppler spectrum has distinct shoulders that increase the accuracy of velocity estimation (both magnitude and Doppler angle) in noisy environments. The shoulders of the Doppler spectrum might also help in solving the inverse problem, e.g., estimation of the velocity distribution in vessels.
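For context, the sketch below evaluates the conventional pulsed-Doppler relation v = f_d·c / (2·f0·cos θ), making explicit the velocity/angle coupling that the Bessel-beam spectrum shoulders are reported to help resolve; the transmit frequency and measured shift are assumed example values.

```python
import math

c = 1540.0     # speed of sound in soft tissue, m/s
f0 = 5e6       # transmit centre frequency, Hz (assumed)
f_d = 3.2e3    # measured Doppler shift, Hz (assumed)

for angle_deg in (0.0, 30.0, 60.0):
    # Conventional Doppler equation: the same shift maps to very different
    # velocities depending on the (often unknown) Doppler angle.
    v = f_d * c / (2.0 * f0 * math.cos(math.radians(angle_deg)))
    print(f"Doppler angle {angle_deg:4.1f} deg -> velocity {v * 100:.1f} cm/s")
```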

14.
This paper addresses issues that arise in measurement system analysis of a binary measurement system when the measurand is a hybrid between a dichotomy and a continuum. A case study is presented that illustrates methods to assess the error rates of binary measurements with such a hybrid measurand. The case study concerns pass/fail inspection of laptop screens for scratches, where the measurand is the presence or absence of scratches. If a scratch is present, the measurand corresponds to a continuum of scratch sizes, but if no scratch is present, the measurand corresponds to a point. It is argued that if the measurand is a hybrid, a standard logistic regression model is not suitable for estimating the characteristic curve relating the reject probability to the measurand. Several alternative specifications for the characteristic curve are introduced and compared. We conclude that many of the methods currently used for assessment of a binary measurement system with a hybrid measurand are unsuitable. This is a remarkable conclusion, given the frequent occurrence in industry of leak tests, inspections for defects, and other binary measurement systems with a hybrid measurand. Copyright © 2011 John Wiley & Sons, Ltd.
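One simple way to picture such a hybrid characteristic curve (an illustrative specification, not necessarily one of those compared in the paper) is a point mass at "defect absent" with its own false-reject rate, joined to a logistic curve over defect size for the continuous part; all parameter values below are assumed.

```python
import numpy as np

alpha = 0.02                     # false-reject probability when no scratch (assumed)
beta0, beta1 = -4.0, 8.0         # logistic parameters over scratch size (assumed)

def reject_probability(scratch_size):
    """scratch_size = 0 means the defect is absent (the point-mass part)."""
    s = np.asarray(scratch_size, dtype=float)
    logistic = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * s)))
    return np.where(s == 0.0, alpha, logistic)

print(reject_probability([0.0, 0.2, 0.5, 1.0]))
```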

15.
Mixed models take the dependency between observations based on the same person into account by introducing one or more random effects. After introducing the mixed-model framework, it is explained, taking the Rasch model as a generic example, how item response models can be conceptualized as generalized linear and nonlinear mixed models. Common estimation methods for generalized linear and nonlinear mixed models are discussed. In a simulation study, the performance of four estimation methods is assessed for the Rasch model under different conditions regarding the number of items and persons and the degree of interindividual differences. The estimation methods included in the study are: an approximation of the integral over the random effect by means of Gaussian quadrature; direct maximization with a sixth-order Laplace approximation to the integrand; a linearized approximation of the nonlinear model employing PQL2; and finally a Bayesian MCMC method. It is concluded that the estimation methods perform almost equally well, except for a slightly worse recovery of the variance parameter for PQL2 and MCMC.
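As a concrete illustration of the first of those four methods, the sketch below evaluates the Rasch marginal log-likelihood with Gaussian (Gauss-Hermite) quadrature over the person random effect. The simulated data set and the number of quadrature nodes are assumptions; a full estimator would maximize this function over the item difficulties and the ability variance.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss   # probabilists' Hermite nodes

def rasch_marginal_loglik(y, item_diff, sigma, n_nodes=21):
    """y: persons x items binary matrix; item_diff: item difficulties."""
    nodes, weights = hermegauss(n_nodes)             # quadrature for N(0,1), up to scaling
    theta = sigma * nodes                            # ability quadrature points
    # P(y_pi = 1 | theta) under the Rasch model, for every node and item.
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - item_diff[None, :])))  # nodes x items
    # Likelihood of each person's response pattern at each node.
    like = np.prod(np.where(y[:, None, :] == 1, p[None], 1.0 - p[None]), axis=2)
    # Integrate over the ability distribution with normalized quadrature weights.
    marg = like @ (weights / weights.sum())
    return np.log(marg).sum()

rng = np.random.default_rng(5)
true_diff, sigma = np.array([-1.0, 0.0, 1.0, 2.0]), 1.2
theta_true = rng.normal(0, sigma, 500)
probs = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - true_diff[None, :])))
y = (rng.uniform(size=probs.shape) < probs).astype(int)
print("log-likelihood at the true parameters:", rasch_marginal_loglik(y, true_diff, sigma))
```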

16.
In model-based process optimization one uses a mathematical model to optimize a certain criterion, for example the product yield of a chemical process. Models often contain parameters that have to be estimated from data. Typically, a point estimate (e.g. the least-squares estimate) is used to fix the model for the optimization stage. However, parameter estimates are uncertain due to incomplete and noisy data. In this article, it is shown how parameter uncertainty can be taken into account in process optimization. To quantify the uncertainty, Markov chain Monte Carlo (MCMC) sampling, an emerging standard approach in Bayesian estimation, is used. In the Bayesian approach, the solution to the parameter estimation problem is given as a distribution, and the optimization criteria are functions of that distribution. The formulation and implementation of the optimization are studied, and numerical examples are used to show that parameter uncertainty can have a large effect on optimization results.
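A toy version of that idea (not the article's case study): given MCMC-style posterior draws of an uncertain rate constant, the operating point is chosen to maximize the expected yield over the posterior rather than the yield at a point estimate. The yield model, cost term and posterior are all assumed.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)

# Pretend these are MCMC draws of an uncertain rate constant (assumed posterior).
k_samples = rng.lognormal(mean=np.log(0.8), sigma=0.3, size=2000)

def yield_model(u, k):
    """Toy yield of a process run for time u with rate constant k."""
    return (1.0 - np.exp(-k * u)) - 0.05 * u      # conversion minus operating cost

# Point-estimate design: plug in the posterior mean of k.
u_point = minimize_scalar(lambda u: -yield_model(u, k_samples.mean()),
                          bounds=(0.1, 20.0), method="bounded").x
# Uncertainty-aware design: maximize the expected yield over the posterior draws.
u_bayes = minimize_scalar(lambda u: -yield_model(u, k_samples).mean(),
                          bounds=(0.1, 20.0), method="bounded").x

print(f"point-estimate optimum u = {u_point:.2f}, "
      f"posterior-expected optimum u = {u_bayes:.2f}")
```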

17.
Cox M. G., Harris P. M. Measurement Techniques, 2004, 47(1): 102-111.
Some of the technical aspects of guidelines for key comparison data evaluation prepared by the BIPM Director's Advisory Group on Uncertainties are considered. These guidelines relate to key comparisons based on the measurement of a travelling standard having good short-term stability and stability during transport, in cases where the institutes' measurements are realised independently. They include two procedures for forming a key comparison reference value (KCRV) and the associated uncertainty, and the consequent degrees of equivalence (including the associated uncertainties), in accordance with the Mutual Recognition Arrangement. The basis of the procedures is (a) the representation of the information provided by the participating institutes as probability density functions (pdfs), and (b) the estimator (model) used as the KCRV. The calculation of the KCRV, the associated uncertainty and the degrees of equivalence is then undertaken in accordance with the law of propagation of uncertainty, as described in the Guide to the Expression of Uncertainty in Measurement (GUM), or with the propagation of distributions, a generalisation of the law of propagation of uncertainty covered in a supplemental guide to the GUM. Attention is paid to the choice of model, relating it to the conditions that apply to the key comparison. The first procedure is intended for cases where, for each institute, a Gaussian distribution is assigned to the measurand of which the institute's measurement is an estimate. The weighted mean is used as the model in this case. A consistency test is included to determine whether the model is consistent with the data. If the test is satisfied, the weighted mean is accepted as the KCRV. The second procedure is used in circumstances where (a) not all the pdfs assigned are Gaussian, or (b) the first procedure has previously been applied, the consistency test was not satisfied, and there was no opportunity to correct all institutes' data regarded as discrepant. The model in this case is chosen to be a more robust estimator, such as the median or another estimator considered appropriate for the particular comparison.
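The first procedure is easy to sketch numerically: the KCRV is the weighted mean of the institutes' results, its uncertainty follows from the weights, and a chi-squared statistic checks consistency before the degrees of equivalence are formed. The institute values below are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

x = np.array([10.012, 10.005, 10.021, 9.998, 10.009])   # institute estimates (assumed)
u = np.array([0.008, 0.010, 0.012, 0.009, 0.011])        # standard uncertainties (assumed)

w = 1.0 / u ** 2
kcrv = np.sum(w * x) / np.sum(w)                 # weighted mean as the KCRV
u_kcrv = 1.0 / np.sqrt(np.sum(w))                # associated standard uncertainty

chi_obs = np.sum(((x - kcrv) / u) ** 2)          # observed chi-squared value
dof = x.size - 1
p_value = chi2.sf(chi_obs, dof)                  # consistency check

d = x - kcrv                                     # degrees of equivalence (values)
u_d = np.sqrt(u ** 2 - u_kcrv ** 2)              # and their standard uncertainties

print(f"KCRV = {kcrv:.4f} +/- {u_kcrv:.4f}")
print(f"chi-squared = {chi_obs:.2f} on {dof} dof, p = {p_value:.2f}")
```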

18.
余学锋. 《计量学报》(Acta Metrologica Sinica), 2000, 21(4): 314-318.
This paper presents a Bayesian method for characterizing measurement uncertainty. By analysing the posterior distribution of the measurement results for the object under measurement, the method obtains the parameter values needed to characterize the uncertainty. Compared with classical statistical methods, the Bayesian approach offers higher estimation accuracy and a more objective characterization of the uncertainty.
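A minimal sketch of posterior-based uncertainty characterization (a standard textbook case, not necessarily the paper's formulation): for repeated readings with a non-informative prior, the posterior for the measurand is a scaled and shifted Student-t, and its standard deviation and credible interval characterize the uncertainty. The readings are assumed example data.

```python
import numpy as np
from scipy.stats import t

readings = np.array([10.03, 10.01, 10.04, 9.99, 10.02, 10.03])   # assumed data
n, xbar, s = readings.size, readings.mean(), readings.std(ddof=1)

# With a non-informative prior, the posterior of the measurand mu is
# t_{n-1}(xbar, s/sqrt(n)); its standard deviation is the Bayesian uncertainty.
scale = s / np.sqrt(n)
post_sd = scale * np.sqrt((n - 1) / (n - 3))      # std dev of a scaled t_{n-1} variable
lo, hi = t.interval(0.95, n - 1, loc=xbar, scale=scale)

print(f"posterior mean {xbar:.4f}, posterior std (uncertainty) {post_sd:.4f}")
print(f"95% credible interval [{lo:.4f}, {hi:.4f}]")
```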

19.
According to the Guide to the Expression of Uncertainty in Measurement (GUM), a result of measurement consists of a measured value together with its associated standard uncertainty. The measured value and the standard uncertainty are interpreted as the expected value and the standard deviation of a state-of-knowledge probability distribution attributed to the measurand. We discuss the term metrological compatibility, introduced by the International Vocabulary of Metrology, third edition (VIM3), for the lack of significant differences between two or more results of measurement for the same measurand. Sometimes a combined result of measurement from multiple evaluations of the same measurand is needed. We propose an approach for determining a combined result which is metrologically compatible with the contributing results.
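A compact sketch of the VIM3-style compatibility check referred to above (not the authors' combination procedure): two results are compatible when their difference is small compared with the standard uncertainty of that difference, for a chosen multiple k; the numerical values are assumed.

```python
import math

def compatible(x1, u1, x2, u2, cov=0.0, k=2.0):
    """k is the chosen coverage factor; cov is the covariance between the results."""
    u_diff = math.sqrt(u1 ** 2 + u2 ** 2 - 2.0 * cov)
    return abs(x1 - x2) <= k * u_diff, abs(x1 - x2) / u_diff

# Returns (True, 1.86): the normalized difference is below k = 2.
print(compatible(100.021, 0.010, 100.050, 0.012))   # assumed example values
```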

20.
A hybrid Subset Simulation approach is proposed for reliability estimation of general dynamical systems subject to stochastic excitation. This new stochastic simulation approach combines the advantages of the two previously proposed Subset Simulation methods: Subset Simulation with a Markov chain Monte Carlo (MCMC) algorithm and Subset Simulation with splitting. The new method employs the MCMC algorithm before an intermediate failure level is reached and splitting after the level is reached, in order to exploit the causality of dynamical systems. The statistical properties of the failure probability estimators are derived. Two examples are presented to demonstrate the effectiveness of the new approach and to compare it with the previous two Subset Simulation methods. The results show that the new method is robust to the choice of proposal distribution for the MCMC algorithm and to the intermediate failure events selected for Subset Simulation.
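To make the Subset Simulation idea concrete, the sketch below estimates a small static failure probability by chaining intermediate levels, each retaining the top p0 fraction of samples and repopulating with Metropolis moves restricted to the current level. It is a generic MCMC Subset Simulation on a toy limit state, not the paper's hybrid MCMC/splitting scheme for dynamical systems; all numbers are assumed.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
dim, N, p0, b = 2, 2000, 0.1, 4.5
g = lambda x: x.sum(axis=-1)              # toy limit-state function (assumed)

x = rng.standard_normal((N, dim))
p_f = 1.0
for _ in range(10):
    y = g(x)
    level = np.quantile(y, 1.0 - p0)      # intermediate failure threshold
    if level >= b:                        # final level reached: finish the estimate
        p_f *= np.mean(y >= b)
        break
    p_f *= p0
    seeds = x[y >= level]
    # Repopulate the conditional level with Metropolis moves targeting the
    # standard normal restricted to {g(x) >= level}.
    chains = [seeds.copy()]
    while sum(len(c) for c in chains) < N:
        cur = chains[-1]
        prop = cur + rng.standard_normal(cur.shape)
        accept = rng.uniform(size=len(cur)) < np.exp(-0.5 * (prop ** 2 - cur ** 2).sum(axis=1))
        nxt = np.where(accept[:, None], prop, cur)
        nxt = np.where((g(nxt) >= level)[:, None], nxt, cur)
        chains.append(nxt)
    x = np.vstack(chains)[:N]

print(f"Subset Simulation estimate {p_f:.2e}  vs  exact {norm.sf(b / np.sqrt(dim)):.2e}")
```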
