Similar Articles
20 similar articles found (search time: 15 ms)
1.
Abstract

The sampling functions needed to reconstruct the density matrix elements in the displaced Fock-state basis from quadrature distributions are determined as scaled and shifted versions of the pattern functions f_mn used to reconstruct the density matrix elements ρ_mn in the Fock-state basis. With the diagonal density matrix elements at hand, one can reconstruct any s-parametrized quasiprobability distribution via a simple weighted sum over these quantities. A smoothed Wigner function can be sampled directly from the measured quadrature distribution of the signal field; the corresponding sampling function is just a shifted and scaled version of f_00.
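The weighted-sum relation between the diagonal displaced-Fock elements and the s-parametrized quasiprobabilities can be sketched numerically. The sketch below assumes the simplest case of a vacuum signal state in a truncated Fock space; the truncation size N and all function names are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

N = 40                                          # Fock-space truncation (assumed large enough)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator in the Fock basis

def displaced_fock_probs(alpha):
    """Diagonal displaced-Fock elements p_n(alpha) for a vacuum signal state."""
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)   # displacement operator D(alpha)
    return np.abs(D[:, 0]) ** 2                 # p_n = |<n|D(alpha)|0>|^2

def quasiprob(alpha, s):
    """s-parametrized quasiprobability as a weighted sum over p_n(alpha)."""
    n = np.arange(N)
    weights = ((s + 1.0) / (s - 1.0)) ** n
    return 2.0 / (np.pi * (1.0 - s)) * np.sum(weights * displaced_fock_probs(alpha))
```

For the vacuum, s = 0 reproduces the Gaussian Wigner function (2/π)·exp(−2|α|²), and s = −1 reduces the sum to its n = 0 term, the Husimi Q function (1/π)·exp(−|α|²).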

2.
The emerging technology of positron emission image reconstruction is introduced in this paper as a multicriteria optimization problem. We show how selected families of objective functions may be used to reconstruct positron emission images, and we develop a novel neural network approach to positron emission imaging problems. We also study the most frequently used image reconstruction method, namely maximum likelihood, under the framework of single-performance-criterion optimization. Finally, we present results obtained by various reconstruction algorithms using computer-generated noisy projection data from a chest phantom and real positron emission tomography (PET) scanner data. Comparison of the reconstructed images indicates that the multicriteria optimization method gave the best results in terms of error, smoothness (suppression of noise), gray-value resolution, and freedom from ghost artifacts. © 2001 John Wiley & Sons, Inc. Int J Imaging Syst Technol 11, 361–364, 2000

3.
We have developed an algorithm called fast maximum likelihood reconstruction (FMLR) that performs spectral deconvolution of 1D and 2D NMR spectra for the purpose of accurate signal quantification. FMLR constructs the simplest time-domain model (i.e., the model with the fewest signals and parameters) whose frequency spectrum matches the visible regions of the spectrum obtained from identical Fourier processing of the acquired data. We describe the application of FMLR to quantitative metabolomics and demonstrate the accuracy of the method by analysis of complex, synthetic mixtures of metabolites and liver extracts. The algorithm achieves greater accuracy (0.5–5.0% error) than peak height analysis and peak integral analysis, with greatly reduced operator intervention. FMLR has been implemented in a Java-based framework that is available for download on multiple platforms and is interoperable with popular NMR display and processing software. Two-dimensional 1H–13C spectra of mixtures can be acquired with acquisition times of 15 min and analyzed by FMLR in 2–5 min per spectrum to identify and quantify constituents present at concentrations of 0.2 mM or greater.

4.
The general problem of reconstructing a biological interaction network from temporal evolution data is tackled via an approach based on dynamical linear systems identification theory. A novel algorithm, based on linear matrix inequalities, is devised to infer the interaction network. This approach makes it possible to take the a priori available knowledge of the biological system directly into account within the optimisation procedure. The effectiveness of the proposed algorithm is statistically validated by means of numerical tests, demonstrating how the a priori knowledge positively affects the reconstruction performance. A further validation is performed through an in silico biological experiment, exploiting the well-established cell-cycle model of fission yeast developed by Novak and Tyson.

5.
The paper describes procedures to compare the fatigue behaviour of two populations, corresponding to batches that differ in one factor, on the basis of fatigue data from two samples. The fatigue data are processed by means of the maximum likelihood method to obtain the SNP curves and their confidence intervals. The likelihood ratio test is used to identify the most adequate distribution and model. To analyse the influence of the factor, two integrated procedures are adopted: the first is based on the likelihood ratio test; the second is an empirical procedure based on the comparison of the confidence intervals of the curves.

6.
In this study, we compare two radar target direction-of-arrival (DOA) estimation algorithms: the classical moving window (MW) estimator, implemented in many real radar systems, and the approximate maximum likelihood (AML) estimator. The first technique exploits multiple detections in the same time-on-target; the second exploits the fact that the mechanical scanning of the radar antenna impresses an amplitude modulation on the signals backscattered by the target. The performance of the two estimators is investigated numerically through Monte Carlo simulation and compared in terms of root-mean-square error (RMSE), probability of detection, and probability of target splitting, the latter being defined as the probability of detecting more than one target when only one is present in the cell under test. Numerical results show that the AML estimator generally outperforms the classical MW estimator, also in terms of robustness to target splitting.

7.
8.
We propose a numerical method to compute the survival (first-passage) probability density function in jump-diffusion models. This function is obtained by numerical approximation of the associated Fokker–Planck partial integro-differential equation, with suitable boundary conditions and a Dirac delta initial condition. To obtain an accurate numerical solution, the singularity of the Dirac delta function is removed using a change of variables based on the fundamental solution of the pure diffusion model. This transforms the original problem into a regular one, which is solved using a radial basis function (RBF) meshless collocation method. In particular, the RBF approximation is carried out in conjunction with a suitable change of variables, which allows radial basis functions with equally spaced centers to be used while still obtaining a sharp resolution of the gradients of the survival probability density function near the barrier. Numerical experiments with several different kinds of radial basis functions are presented. The results reveal that the proposed numerical method is extremely accurate and fast, and performs significantly better than a conventional finite difference approach.

9.
In this paper, a new approach for evaluating the probability density function (pdf) of a random variable from knowledge of its lower moments is presented. First the classical moment problem (MP) is revisited, which gives the conditions under which an assigned sequence of sample moments really represents a sequence of moments of some distribution. Then an alternative approach is presented, termed by the authors the kernel density maximum entropy (MaxEnt) method, which approximates the target pdf as a convex linear combination of kernel densities, transforming the original MP into a discrete MP that is solved through a MaxEnt approach. In this way, by simply solving a discrete MaxEnt problem that does not require the evaluation of numerical integrals, an approximating pdf converging toward the MaxEnt pdf is obtained. The method is first demonstrated by approximating some known analytical pdfs (the chi-square and Gumbel pdfs) and is then applied to some experimental engineering problems, namely modelling the pdf of concrete strength, of the circular frequency and damping ratio of strong ground motions, and of the extreme wind speed in the Messina Strait region. All the numerical applications demonstrate the effectiveness of the proposed procedure. Copyright © 2008 John Wiley & Sons, Ltd.
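The discrete MaxEnt step the abstract describes can be sketched as a small convex optimization: maximize entropy on a fixed grid subject to moment constraints, solved through the dual so that no numerical integrals are needed. This is a simplified illustration (raw-moment constraints, invented grid), not the authors' kernel-density formulation.

```python
import numpy as np
from scipy.optimize import minimize

def discrete_maxent(x, mu):
    """MaxEnt pmf on the grid x whose first len(mu) raw moments equal mu.

    The MaxEnt solution has the exponential-family form
    p_i proportional to exp(-sum_k lam_k * x_i**k); the multipliers lam are
    found by minimizing the convex dual log Z(lam) + lam . mu."""
    mu = np.asarray(mu, dtype=float)
    powers = np.vstack([x ** (k + 1) for k in range(len(mu))])   # shape (K, n)

    def dual(lam):
        logits = -(lam @ powers)
        m = logits.max()                      # log-sum-exp stabilization
        return m + np.log(np.exp(logits - m).sum()) + lam @ mu

    lam = minimize(dual, np.zeros(len(mu)), method="BFGS").x
    logits = -(lam @ powers)
    p = np.exp(logits - logits.max())
    return p / p.sum()

# illustrative use: match the first two moments of a standard normal
grid = np.linspace(-6.0, 6.0, 241)
p = discrete_maxent(grid, [0.0, 1.0])
```

Because the dual gradient is exactly the moment mismatch, the constraints are satisfied to the optimizer's gradient tolerance at convergence, and the recovered pmf is a discretized standard Gaussian.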

10.
TEST (1991), 6(1): 127–135
Summary: We consider a normal model with known diagonal covariance matrix and a vector of means constrained to belong to a polyhedral cone. The standard estimators X (the unrestricted MLE) and X* (the restricted MLE) are compared for simultaneous estimation of several components of the parameter. We show that X* is preferred to X under several conditions.

11.
12.
13.
Using the variational density matrix method, we obtain a temperature-dependent elementary excitation spectrum for two-dimensional liquid ⁴He. For more precise results, we use a Jastrow–Feenberg-type trial wave function and include the contribution of elementary diagrams within the hypernetted-chain approximation. The behavior of the excitation spectrum as a function of temperature and density in two dimensions is similar to that of the bulk system, but with a smaller roton minimum. The roton minimum of the excitation spectrum decreases with increasing temperature; it increases with increasing density at low densities but decreases at large densities. The results agree well with Monte Carlo calculations and are closer than previous theories to experimental measurements of ⁴He films adsorbed on substrates.

14.
The application of the principle of maximum likelihood to the analysis of fatigue test results, including run-outs, is described. The particular method used is that developed by Edwards, called 'Support'. The paper describes the use of this method in determining means and standard deviations for test results; in determining 'best-fit' S/N curves with their associated standard deviations; in assessing the significance of differences between groups of results and between different S/N curves; and in determining 'best' common slopes and intercepts for such curves. A computer program developed to perform the necessary calculations is outlined. Examples are given of the types of results produced by this analysis and of certain difficulties in their interpretation.
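The key technical point of such an analysis is that run-outs enter the likelihood through the survival probability rather than the density. A minimal sketch of that idea (not Edwards' 'Support' method itself) fits a linear S/N curve with log-normal scatter to synthetic, right-censored fatigue data; all parameter values and the run-out limit are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# synthetic fatigue data: log10(N) = a + b*log10(S) + noise (parameters invented)
a_true, b_true, sigma_true = 10.0, -3.0, 0.2
S = rng.uniform(100.0, 400.0, size=200)             # stress amplitudes
logS = np.log10(S)
logN = a_true + b_true * logS + rng.normal(0.0, sigma_true, size=200)

runout = 3.5                                        # test stopped at 10**3.5 cycles
censored = logN > runout                            # run-outs: only a lower bound is known
logN_obs = np.where(censored, runout, logN)

def negloglik(theta):
    a, b, log_sig = theta
    z = (logN_obs - a - b * logS) / np.exp(log_sig)
    ll = norm.logpdf(z[~censored]).sum() - (~censored).sum() * log_sig  # exact failures
    ll += norm.logsf(z[censored]).sum()             # run-outs: log P(life > limit)
    return -ll

res = minimize(negloglik, x0=[8.0, -2.0, np.log(0.5)], method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-10, "xatol": 1e-8})
a_hat, b_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
```

Dropping the run-outs instead of using their survival term would bias the fitted slope and scatter, which is exactly why the censored likelihood matters.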

15.
16.
We present a new "no-background" procedure, based on the maximum likelihood method, for fitting the space-time size parameters of the particle production region in ultra-relativistic heavy-ion collisions. This procedure uses an approximation to avoid the necessity of constructing a mixed-event background before fitting the data.

17.
The data acquired by a magnetic resonance (MR) imaging system are inherently degraded by noise originating in the thermal Brownian motion of electrons. Denoising can enhance the quality (by improving the SNR) of the acquired MR image, which is important both for visual analysis and for other post-processing operations. Recent work on maximum likelihood (ML) based denoising shows that ML methods are very effective in denoising MR images and have an edge over the other state-of-the-art methods for MRI denoising. Among the ML-based approaches, the nonlocal maximum likelihood (NLML) method is commonly used. In the conventional NLML method, the samples for the ML estimation of the unknown true pixel are chosen in a nonlocal fashion based on the intensity similarity of the pixel neighborhoods, with Euclidean distance generally used to measure this similarity. It has recently been shown that computing the similarity measure is more robust in the discrete cosine transform (DCT) subspace than in the Euclidean image subspace. Motivated by this observation, we integrated the DCT into NLML to produce an improved MRI filtering process. Besides improving the SNR, the proposed approach also significantly reduces the time complexity of the conventional NLML. On a synthetic MR brain image, an average improvement of 5% in PSNR and an 86% reduction in execution time are achieved with a search window size of 91 × 91 after incorporating the improvements into the existing NLML method. On an experimental kiwi-fruit image an improvement of 10% in PSNR is achieved. We performed experiments on both simulated and real data sets to validate and demonstrate the effectiveness of the proposed method. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 256–264, 2015
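A DCT-subspace patch similarity of the kind described can be sketched as follows. Because the orthonormalized 2-D DCT preserves Euclidean distances, keeping all coefficients recovers the usual measure exactly, while truncating to a few low-frequency coefficients discards mostly noise energy and cuts the cost; the patch size and `keep` value are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.fft import dctn

def dct_patch_distance(p, q, keep=3):
    """Squared distance between two image patches in a truncated DCT subspace.

    Only the keep x keep low-frequency DCT coefficients are compared, which
    makes the similarity measure cheaper and less sensitive to noise."""
    P = dctn(p, norm="ortho")                   # orthonormal 2-D DCT
    Q = dctn(q, norm="ortho")
    d = P[:keep, :keep] - Q[:keep, :keep]
    return float(np.sum(d * d))
```

By Parseval's relation for the orthonormal DCT, the truncated distance is always a lower bound on the full Euclidean patch distance.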

18.
A computational procedure is presented for calculating the maximum likelihood estimates of the parameters of the gamma and beta distributions from sampled data. The procedure employed is both computationally efficient and easy to use. To facilitate immediate application, a self-contained FORTRAN IV computer code is included to carry out the necessary operations.
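For the gamma case, a standard ML procedure (not necessarily the one in the paper's FORTRAN code) reduces to a one-dimensional Newton iteration on the shape parameter, with the scale recovered in closed form:

```python
import numpy as np
from scipy.special import digamma, polygamma

def gamma_mle(x, iters=50):
    """Maximum likelihood estimates (shape k, scale theta) for i.i.d. gamma samples.

    Newton's method on the profile equation log(k) - digamma(k) = s, where
    s = log(mean(x)) - mean(log(x)); then theta = mean(x) / k."""
    x = np.asarray(x, dtype=float)
    s = np.log(x.mean()) - np.log(x).mean()
    # standard closed-form starting value for the shape parameter
    k = (3.0 - s + np.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
    for _ in range(iters):
        f = np.log(k) - digamma(k) - s
        k -= f / (1.0 / k - polygamma(1, k))    # polygamma(1, .) is the trigamma function
    return k, x.mean() / k

# illustrative use on synthetic data
rng = np.random.default_rng(3)
x = rng.gamma(shape=2.5, scale=1.5, size=5000)
k_hat, theta_hat = gamma_mle(x)
```

The profile equation is monotone in k, so the Newton iteration converges rapidly from the closed-form start.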

19.
XH Nguyen, SH Lee, HS Ko. Applied Optics (2012), 51(24): 5834–5844
Three-dimensional optical tomography techniques were developed to reconstruct three-dimensional objects from a set of two-dimensional projection images. Five basis functions (cubic B-spline, o-Moms, Keys, cosine, and Gaussian) were used to calculate the weighting coefficients for the projection matrix. Two different forms of a multiplicative algebraic reconstruction technique were used to solve the inverse problem. The reconstruction algorithm was examined using several phantoms, which included droplet behaviors and random distributions of particles in a volume. The three-dimensional particle volume was reconstructed from four projection angles separated by offsets of 45°. Three-dimensional velocity fields were then obtained from the reconstructed particle volume by three-dimensional cross-correlation. The velocity field of a synthetic vortex flow was reconstructed to analyze the three-dimensional tomography algorithm.
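One common form of the multiplicative algebraic reconstruction technique (MART) mentioned above can be sketched in a few lines; the toy system below (row and column sums of a 2 × 2 "image" as four projections) is invented for illustration and is far smaller than a real tomographic projection matrix.

```python
import numpy as np

def mart(A, b, n_iter=200, lam=1.0):
    """Multiplicative ART: x_j <- x_j * (b_i / (A_i . x))**(lam * A_ij),
    sweeping the rays i in order. Multiplicative updates from a positive
    start keep the reconstruction nonnegative throughout."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            ray = A[i] @ x                      # current estimate of projection i
            if ray > 0.0:
                x = x * (b[i] / ray) ** (lam * A[i])
    return x

# illustrative use: 4 projections of a 2x2 "image" with intensities 1..4
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
x_rec = mart(A, A @ x_true)
```

Because the row/column-sum system is underdetermined, MART converges to a nonnegative solution consistent with all projections rather than necessarily to `x_true` itself.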

20.
Two types of diagrammatic approaches for the design and simulation of nonlinear optical experiments (closed-time-path loops based on the wave function, and double-sided Feynman diagrams for the density matrix) are presented and compared. We give guidelines for the assignment of relevant pathways and provide rules for the interpretation of existing nonlinear experiments in carotenoids.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)