Similar Documents
20 similar documents found (search time: 46 ms)
1.
A modification to the maximum likelihood algorithm was developed for classification of forest types in Sweden's part of the CORINE land cover mapping project. The new method, called the "calibrated maximum likelihood classification", involves an automated and iterative adjustment of prior weights until class frequency in the output corresponds to class frequency as calculated from objective (field-inventoried) estimates. This modification compensates for the maximum likelihood algorithm's tendency to over-represent dominant classes and under-represent less frequent ones. National forest inventory plot data measured over a five-year period are used to estimate the relative frequency of class occurrence and to derive spectral signatures for each forest class. The classification method was implemented operationally within an automated production system which allowed rapid production of a country-wide forest type map from Landsat TM/ETM+ satellite data. The production system automated the retrieval and updating of forest inventory plots, a plot-to-image matching routine, illumination and haze correction of satellite imagery, and classification into forest classes using the calibrated maximum likelihood classification. This paper describes the details of the method and compares the results of iteratively adjusted versus unadjusted prior weights. It shows that the calibrated maximum likelihood algorithm corrects the over-classification of classes that are well represented in the training data, and adjusts the remaining classes accordingly, yielding an output whose class proportions are close to those expected from forest inventory data.
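The iterative prior-weight adjustment can be sketched as follows. This is a minimal illustration, not the production system's code; the multiplicative log-space update rule and the learning rate are assumptions:

```python
import numpy as np

def calibrated_ml_classify(log_likelihoods, target_freq, n_iter=100, lr=0.5):
    """Iteratively adjust log prior weights until the classified output's
    class frequencies approach the target (inventory-derived) frequencies.

    log_likelihoods: (n_pixels, n_classes) per-class log-likelihoods
    target_freq:     (n_classes,) desired class proportions, summing to 1
    """
    target_freq = np.asarray(target_freq, dtype=float)
    log_prior = np.log(target_freq)  # start from the target proportions
    for _ in range(n_iter):
        labels = np.argmax(log_likelihoods + log_prior, axis=1)
        out_freq = np.bincount(labels, minlength=len(target_freq)) / len(labels)
        # boost priors of under-represented classes, shrink over-represented ones
        log_prior += lr * (np.log(target_freq + 1e-9) - np.log(out_freq + 1e-9))
    return np.argmax(log_likelihoods + log_prior, axis=1)
```

With uniform priors, the plain maximum likelihood decision would leave output frequencies wherever the likelihoods put them; the calibration loop drives them toward the inventory-based proportions instead.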

2.
We review the principal methods for estimating the parameters of multivariate autoregressive moving average equations that have additional observable input terms, and present some new estimation methods as well. We begin with the conditions for estimability of the parameters. In addition to the usual system representation, canonical form I, we present two new representations of the system equation, the so-called canonical forms II and III, which are convenient for parameter estimation. We discuss in some detail the various estimation methods (the least-squares family, maximum likelihood, etc.), comparing their relative estimation accuracy and computational complexity. We introduce a new class of estimates, the so-called limited information estimates, which utilize canonical forms II and III. Their accuracy is close to that of maximum likelihood, but their computation time is only a fraction of that of the usual maximum likelihood estimates. A few numerical examples illustrate the various methods.
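As a concrete instance of the simplest of these estimators, here is an ordinary least-squares fit of a single-output ARX model (an autoregression with an observable input). This is only a sketch; the paper's multivariate canonical-form and limited-information estimators are more involved:

```python
import numpy as np

def fit_arx(y, u, na, nb):
    """Ordinary least-squares fit of a single-output ARX model
    y[t] = a_1 y[t-1] + ... + a_na y[t-na]
         + b_1 u[t-1] + ... + b_nb u[t-nb] + e[t]."""
    p = max(na, nb)
    rows, targets = [], []
    for t in range(p, len(y)):
        # regressor: most recent outputs first, then most recent inputs
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return theta[:na], theta[na:]
```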

3.
A new mathematical model of short-term glucose regulation by insulin is proposed to exploit the oral glucose tolerance test (OGTT), which is commonly used for clinical diagnosis of glucose intolerance and diabetes. Contributions of endogenous and exogenous sources to measured plasma glucose concentrations have been separated by means of additional oral administration and constant intravenous infusion of glucose labeled with two different tracers. Twelve type 2 diabetic patients (7 males and 5 females) and 10 control subjects (5 males and 5 females) with normal glucose tolerance and matched body mass index (BMI) participated in this study. Blood samples for measurement of concentrations/activity of unlabeled and double-tracer glucose and insulin were collected every 15 min for 3 h following the oral glucose load. A minimal model combined with non-linear mixed-effects population parameter estimation has been devised to characterize group-average and between-patient variability of: (i) gastrointestinal glucose absorption; (ii) endogenous glucose production (EGP), and (iii) glucose disposal rate. Results indicate that insulin-independent glucose clearance does not vary significantly with gender or diabetic state and that the latter strongly affects, as expected, insulin-dependent clearance (insulin sensitivity). Inhibition of EGP, interpreted in terms of variations from basal of insulin concentrations, does not appear to be affected by diabetes but rather by BMI, i.e. by the degree of obesity. This study supports the utility of a minimal modelling approach, combined with population parameter estimation, to characterize glucose absorption, production and disposition during double-tracer OGTT experiments. The model provides a means for planning further experiments to validate the new hypothesis on the influence of individual factors, such as BMI and diabetes, on glucose appearance and disappearance, and for designing new simplified clinical tests.
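The minimal-model idea can be sketched with a forward-Euler simulation of a Bergman-type glucose-insulin minimal model. The paper's full population model with tracer separation is considerably more elaborate; the parameter values and units below are purely illustrative assumptions:

```python
import numpy as np

def simulate_minimal_model(t, insulin, ra, Gb=5.0, Ib=60.0,
                           sg=0.02, p2=0.02, p3=1e-5, V=1.6):
    """Forward-Euler simulation of a Bergman-type minimal model.
    t: time grid (min); insulin: plasma insulin; ra: exogenous glucose
    rate of appearance (e.g. from oral absorption), scaled by volume V.
    G is plasma glucose, X is 'remote' insulin action.
    All parameter values here are illustrative, not fitted estimates."""
    G = np.empty_like(t, dtype=float)
    X = np.zeros_like(t, dtype=float)
    G[0] = Gb
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        # glucose: insulin-independent clearance sg, insulin action X, input ra
        G[k + 1] = G[k] + dt * (-(sg + X[k]) * G[k] + sg * Gb + ra[k] / V)
        # remote insulin action driven by deviation of insulin from basal
        X[k + 1] = X[k] + dt * (-p2 * X[k] + p3 * (insulin[k] - Ib))
    return G, X
```

At basal insulin with no glucose input, the model sits at its steady state Gb; an oral glucose appearance pulse drives G up, after which clearance returns it toward basal.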

4.
In this study, an adaptive FastSLAM (AFastSLAM) algorithm is proposed that improves FastSLAM by estimating the time-varying noise statistics. The improvement is accomplished using maximum likelihood estimation with an expectation-maximization criterion and a one-step smoothing algorithm in importance sampling. In addition, an innovation covariance estimation (ICE) method is used to prevent loss of positive definiteness of the process and measurement noise covariance matrices. The proposed method was compared with FastSLAM by calculating the root mean square error (RMSE) using different particle numbers at varying initial process and measurement noise values. Simulation studies show that AFastSLAM provides much more accurate, consistent, and successful estimates than FastSLAM for both robot and landmark positions.

5.
阐述了极大似然估计算法用于无线传感器网络节点自定位的原理;阐述了最速下降算法求非线性方程组最优解的原理;提出在距离测量误差较大的情况下,使用最速下降算法优化极大似然估计算法所得的节点定位值,并通过模拟实验证实其可行性。实验结果表明,在无须多余通信代价的条件下,优化处理使定位精度得到很大提高,且算法收敛快,计算代价小,适用于无线传感器网络的节点自定位。  相似文献   
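The two steps, a linearized maximum likelihood (least-squares) position fix followed by steepest-descent refinement of the nonlinear range residuals, can be sketched as follows. The step size and iteration count are assumptions:

```python
import numpy as np

def ml_localization(anchors, dists):
    """Closed-form least-squares position estimate (maximum likelihood under
    Gaussian ranging noise) obtained by linearizing the range equations."""
    anchors = np.asarray(anchors, float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
         + dists[0] ** 2 - dists[1:] ** 2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def steepest_descent_refine(p0, anchors, dists, lr=0.05, n_iter=500):
    """Refine the estimate by gradient descent on the sum of squared
    range residuals sum_i (|p - a_i| - d_i)^2."""
    p = np.asarray(p0, float).copy()
    anchors = np.asarray(anchors, float)
    for _ in range(n_iter):
        diff = p - anchors                         # (n_anchors, 2)
        r = np.linalg.norm(diff, axis=1)           # predicted ranges
        # gradient up to a constant factor of 2 (absorbed into lr)
        g = np.sum(((r - dists) / r)[:, None] * diff, axis=0)
        p -= lr * g
    return p
```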

6.
A new two-parameter distribution with decreasing failure rate is introduced. Various properties of the introduced distribution are discussed. The EM algorithm is used to determine the maximum likelihood estimates and the asymptotic variances and covariance of these estimates are obtained. Simulation studies are performed in order to assess the accuracy of the approximation of the variances and covariance of the maximum likelihood estimates and investigate the convergence of the proposed EM scheme. Illustrative examples based on real data are also given.
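The abstract does not name the distribution. As an illustration of an EM scheme of this kind, here is the closed-form EM for the exponential-geometric family, a well-known two-parameter lifetime model with decreasing failure rate in which X = min(X₁, …, X_Z) with X_j ~ Exp(β) and Z geometric. The updates below follow from the complete-data likelihood under that assumption:

```python
import numpy as np

def em_exp_geometric(x, n_iter=500):
    """EM for the exponential-geometric distribution with pdf
    f(x) = beta*(1-p)*exp(-beta*x) / (1 - p*exp(-beta*x))**2, x > 0.
    Latent Z ~ Geometric(1-p); E[Z | x] has the closed form below."""
    x = np.asarray(x, float)
    n = len(x)
    beta, p = 1.0 / x.mean(), 0.5           # crude starting values
    for _ in range(n_iter):
        a = p * np.exp(-beta * x)           # always in (0, 1)
        w = (1.0 + a) / (1.0 - a)           # E-step: E[Z | x]
        beta = n / np.dot(w, x)             # M-step for beta
        p = 1.0 - n / w.sum()               # M-step for p
    return beta, p
```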

7.
Interventricular septum thickness in end-diastole (IVSd) is one of the key parameters in cardiology. This paper presents a fast algorithm, suitable for pocket-sized ultrasound devices, for measurement of IVSd using 2D B-mode parasternal long axis images. The algorithm is based on a deformable model of the septum and the mitral valve. The model shape is estimated using an extended Kalman filter. A feasibility study using 32 unselected recordings is presented. The recordings originate from a database consisting of subjects from a normal healthy population. Five patients with suspected hypertrophy were included in the study. Reference B-mode measurements were made by two cardiologists. A paired t-test revealed a non-significant mean difference, compared to the B-mode reference, of (mean±SD) 0.14±1.36 mm (p=0.532). Pearson's correlation coefficient was 0.79 (p<0.001). The results are comparable to the variability between the two cardiologists, which was found to be 1.29±1.23 mm (p<0.001). The results indicate that the method has potential as a tool for rapid assessment of IVSd.
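The shape estimation at the core of the method uses the standard extended Kalman filter measurement update. A generic sketch of that update step is shown below; the paper's actual state is the deformable septum/valve model, which is not reproduced here:

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One extended-Kalman-filter measurement update: linearize the
    (possibly nonlinear) measurement model h at the current estimate x."""
    H = H_jac(x)                          # measurement Jacobian
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```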

8.
The current computational power and some recently developed algorithms allow a new automatic spectral analysis method for randomly missing data. Accurate spectra and autocorrelation functions are computed from the estimated parameters of time series models, without user interaction. If only a few data points are missing, the accuracy is almost the same as when all observations are available. For larger missing fractions, low-order time series models can still be estimated with good accuracy if the total observation time is long enough. Autoregressive models are best estimated with the maximum likelihood method when data are missing. Maximum likelihood estimates of moving average and of autoregressive moving average models are not very useful with missing data; those models are found most accurately if they are derived from the estimated parameters of an intermediate autoregressive model. With statistical criteria for the selection of model order and model type, a completely automatic and numerically reliable algorithm is developed that estimates the spectrum and the autocorrelation function in randomly missing data problems. The accuracy was better than what can be obtained with other methods, including the well-known expectation-maximization (EM) algorithm.
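The core idea, computing the spectrum from the estimated time-series parameters rather than from the data directly, can be sketched for an AR model. The missing-data fit below is a simple conditional least-squares stand-in for AR(1), not the paper's full maximum likelihood scheme:

```python
import numpy as np

def ar_spectrum(ar_coefs, sigma2, freqs):
    """PSD of an AR(p) process x[n] = sum_k a_k x[n-k] + e[n], var(e) = sigma2,
    evaluated at normalized frequencies in [0, 0.5]."""
    p = len(ar_coefs)
    e = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, p + 1)))
    return sigma2 / np.abs(1 - e @ np.asarray(ar_coefs)) ** 2

def fit_ar1_missing(x, present):
    """AR(1) coefficient from only those consecutive pairs where both
    samples are present (a crude stand-in for full ML with missing data)."""
    ok = present[1:] & present[:-1]
    xp, xc = x[:-1][ok], x[1:][ok]
    return float(np.dot(xp, xc) / np.dot(xp, xp))
```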

9.
When analyzing survival data, the parameter estimates, and consequently the relative risk estimates, of a Cox model sometimes do not converge to finite values. This phenomenon is due to special conditions in a data set and is known as 'monotone likelihood'. Statistical software packages for Cox regression using the maximum likelihood method cannot appropriately deal with this problem. A new procedure to solve the problem has been proposed by Heinze and Schemper (A solution to the problem of monotone likelihood in Cox regression, Biometrics 57, 2001). It has been shown that, unlike the standard maximum likelihood method, this method always leads to finite parameter estimates. We developed a SAS macro and an S-PLUS library to make this method available from within these widely used statistical software packages. Our programs are also capable of performing interval estimation based on the profile penalized log likelihood (PPL) and of plotting the PPL function, as suggested by Heinze and Schemper (2001).

10.
When analyzing clinical data with binary outcomes, the parameter estimates, and consequently the odds ratio estimates, of a logistic model sometimes do not converge to finite values. This phenomenon is due to special conditions in a data set and is known as 'separation'. Statistical software packages for logistic regression using the maximum likelihood method cannot appropriately deal with this problem. A new procedure to solve the problem has been proposed by Heinze and Schemper (Stat. Med. 21 (2002) pp. 2409-2419). It has been shown that, unlike the standard maximum likelihood method, this method always leads to finite parameter estimates. We developed a SAS macro and an S-PLUS library to make this method available from within these widely used statistical software packages. Our programs are also capable of performing interval estimation based on the profile penalized log likelihood (PPL) and of plotting the PPL function, as suggested by Heinze and Schemper (2002).
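Heinze and Schemper's remedy is Firth's penalized likelihood (a Jeffreys-prior penalty), whose modified-score iteration stays finite even under complete separation. A compact sketch with step-halving is shown below; this is an illustration of the technique, not the SAS/S-PLUS programs described in the abstract:

```python
import numpy as np

def _pll(X, y, beta):
    """Penalized log-likelihood: logistic log-likelihood + 0.5*log det(I)."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    I = (X.T * (p * (1 - p))) @ X
    eps = 1e-10
    return ((y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)).sum()
            + 0.5 * np.linalg.slogdet(I)[1])

def firth_logistic(X, y, n_iter=25):
    """Firth-penalized logistic regression via the modified score equations."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        I = (X.T * W) @ X                   # Fisher information
        I_inv = np.linalg.inv(I)
        h = np.einsum('ij,jk,ik->i', X, I_inv, X) * W   # hat-matrix diagonal
        step = I_inv @ (X.T @ (y - p + h * (0.5 - p)))  # modified score step
        # step-halving so the penalized log-likelihood never decreases
        while _pll(X, y, beta + step) < _pll(X, y, beta) and np.linalg.norm(step) > 1e-10:
            step *= 0.5
        beta = beta + step
    return beta
```

On the completely separated toy data in the usage below, ordinary maximum likelihood would diverge; the penalized estimates remain finite.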

11.
By construction, model predictive control (MPC) relies heavily on predictive capabilities. Good control simultaneously requires predictions that provide consistent, strong filtering of sensor noise, as well as fast adaptation to disturbances. For example, controllers seeking to regulate the blood glucose levels in persons with Type 1 Diabetes should filter noise in the continuous glucose monitor (CGM) readings, while also adapting instantly to meals that trigger an extended upsurge in those same readings. One way to do this is to switch between multiple models with distinct dynamics. When the data suggest a disturbance, the relevant model is given more influence over the predictions; when there is no evidence of a disturbance, the non-disturbance model is given precedence. To reduce the effect of sensor noise we include prior information about the likely timing of the meal disturbances. Specifically, we model the system as making discrete transitions to new disturbances, allowing us to include the prior information as the prior probability of those transitions. Since each transition engenders a new disturbance case, we present a method to combine the cases that minimizes error and computational load. We develop a set of prior probabilities for meals that encode knowledge of the time of day, the timing of the last meal, sleep announcement, and meal announcement, and use this to detect and estimate current or past meals as well as to anticipate future meals. Additionally, since this application can have asymmetric actuation and costs, violating the certainty equivalence principle, we also provide estimates of the prediction uncertainty. This method reduces 2 h prediction error by 45% relative to an algorithm without meal detection and by 18% relative to one with meal detection. For 3 h prediction these improvements grow to 66% and 30% respectively. The algorithm also improves the accuracy of prediction uncertainty estimates.
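The switching logic, mixing model probabilities through prior transition probabilities and then reweighting by how well each model explains the new CGM sample, is in essence a discrete Bayes filter. A minimal sketch, where the transition matrix and likelihood values are illustrative assumptions:

```python
import numpy as np

def update_model_probs(probs, likelihoods, prior_transition):
    """One step of discrete Bayesian switching between candidate models.
    probs:            current model probabilities
    likelihoods:      p(new measurement | model) for each model
    prior_transition: row-stochastic matrix of prior switch probabilities
                      (e.g. meal transitions more likely at mealtimes)."""
    predicted = prior_transition.T @ probs       # prior mixing step
    posterior = predicted * likelihoods          # Bayes reweighting
    return posterior / posterior.sum()
```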

12.
For identifying errors-in-variables models, two approaches are the time-domain maximum likelihood (TML) method and the sample maximum likelihood (SML) method. Both give optimal estimation accuracy, but under different assumptions: in the TML method, an important assumption is that the noise-free input signal is modelled as a stationary process with rational spectrum, whereas for SML the noise-free input needs to be periodic. It is interesting to know which of these assumptions contains more information to boost the estimation performance. In this paper, the estimation accuracy of the two methods is analyzed statistically for both errors-in-variables (EIV) and output error models (OEM). Numerical comparisons between the two estimates are also made under different signal-to-noise ratios (SNRs). The results suggest that TML and SML have similar estimation accuracy at moderate or high SNR for EIV; for OEM identification, the two methods have the same accuracy at any SNR.

13.
A new Newton-based approach is proposed for finding the global maximum of a nonlinear function subject to various inequality constraints. The method can be applied to nonparametric maximum likelihood estimation problems that attribute tumor lethality in long-term carcinogenicity studies. It is substantially faster and easier to implement than the Complex Method used in Ahn et al. (2000), and is especially useful when a large number of parameters must be estimated under many nonlinear inequality constraints. A Monte Carlo simulation study is conducted to evaluate the computational efficiency and accuracy of the estimates obtained from the new approach, and its advantages are illustrated with a real data set.
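A minimal stand-in for the constrained Newton idea is to take Newton steps on the objective and project back onto the feasible set. The sketch below handles only simple box constraints (the paper treats general nonlinear inequality constraints):

```python
import numpy as np

def projected_newton_max(grad, hess, x0, lower, upper, n_iter=50):
    """Maximize a concave objective under box constraints: full Newton
    steps, each projected (clipped) back into the feasible box."""
    x = np.clip(np.asarray(x0, float), lower, upper)
    for _ in range(n_iter):
        # Newton step for maximization: x <- x - H^{-1} g (H negative definite)
        x = np.clip(x - np.linalg.solve(hess(x), grad(x)), lower, upper)
    return x
```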

14.
A new missing-data algorithm, ARFIL, gives good results in spectral estimation. The log likelihood of a multivariate Gaussian random variable can always be written as a sum of conditional log likelihoods. For a complete set of autoregressive AR(p) data, the best predictor in the likelihood requires only the p previous observations. If observations are missing, the best AR predictor in the likelihood will in general involve all previous observations; using only those observations that fall within a finite time interval approximates this likelihood. The resulting non-linear estimation algorithm requires no user-provided starting values. In various simulations, the spectral accuracy of the robust maximum likelihood method was much better than the accuracy of other spectral estimates for randomly missing data.

15.
The lognormal model can be fitted to survival data using a stable linear algorithm. When tested on 800 sets of mathematically generated data, this method proved more stable and efficient than the iterative method of maximum likelihood, which requires initial estimates of model parameters and failed to fit a substantial fraction of data sets. Though maximum likelihood yielded more consistent estimates of proportion cured, mean, and standard deviation of log(survival time), the linear normal algorithm may nevertheless prove useful for these purposes: (i) computing initial estimates of model parameters for the maximum likelihood method; (ii) fitting data sets that cannot be fit by this method; and (iii) deriving the lognormal model directly from cumulative mortality.
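A linear fit of the lognormal model rests on the fact that its CDF becomes a straight line after a probit transform: Φ⁻¹(F(t)) = (ln t − μ)/σ, which is linear in ln t. The sketch below fits complete (uncensored) data and ignores the cured fraction of the paper's three-parameter model:

```python
import numpy as np
from scipy.stats import norm

def lognormal_linear_fit(times):
    """Fit a lognormal model by linear regression of the probit of the
    empirical CDF on log(time): slope = 1/sigma, intercept = -mu/sigma."""
    t = np.sort(np.asarray(times, float))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.5) / n      # plotting positions
    z = norm.ppf(F)                           # probit of empirical CDF
    slope, intercept = np.polyfit(np.log(t), z, 1)
    sigma = 1.0 / slope
    mu = -intercept * sigma
    return mu, sigma
```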

16.
To increase the accuracy of predicting net primary productivity (NPP), in this study the Carnegie–Ames–Stanford Approach (CASA) model was modified by developing new methods to estimate the fraction of absorbed photosynthetically active radiation (FPAR) and the water stress coefficient (WSC). In the modified model, FPAR was derived from its non-linear relationship with leaf area index, and WSC was estimated using leaf water potential from soil moisture instead of the traditional evapotranspiration-based method. The study was conducted in the Baiyun District of Guangzhou, China, using Gaofen-1 (GF-1), Landsat 7, and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images. The predictions from the original and three modified CASA models and the MODIS NPP product MOD17A3 were compared with field observations. The results showed that all the CASA-based models led to similar spatial distributions of forest aboveground NPP estimates. Overall, the estimates increased with elevation because the valley bottoms were dominated by developed or urbanized areas whereas the hillslopes and hilltops were largely vegetated. Based on root mean square error (RMSE) and relative RMSE between the observed and predicted values, the CASA model that integrated the modifications of both FPAR and WSC increased the estimation accuracy of NPP by 8.1% over the original one. The increase in accuracy was contributed mainly by the modification of FPAR, suggesting that the FPAR modification offers greater potential than the WSC modification for improving the CASA model's predictions. Compared to the CASA models, MOD17A3 had lower accuracy of aboveground NPP estimates. This study also showed that fine-spatial-resolution GF-1 imagery provides a new data source for estimating the NPP of forest ecosystems.
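The FPAR modification can be sketched with a Beer's-law style saturation curve feeding the standard CASA light-use-efficiency equation. The extinction coefficient and maximum FPAR below are placeholder values, not the paper's fitted relationship:

```python
import numpy as np

def fpar_from_lai(lai, k=0.5, fpar_max=0.95):
    """FPAR from leaf area index via a Beer's-law light-extinction curve
    (hypothetical coefficients; the paper fits its own non-linear relation)."""
    return fpar_max * (1.0 - np.exp(-k * np.asarray(lai, float)))

def npp_casa(apar, epsilon_max, t_scalar, w_scalar):
    """CASA light-use-efficiency core: NPP = APAR * eps_max * T * W,
    where T and W are the temperature and water stress scalars in [0, 1]."""
    return apar * epsilon_max * t_scalar * w_scalar
```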

17.
A New Method for Parameter Estimation of Hidden Markov Models  (Cited by: 2)
This paper proposes a new method for estimating the parameters of hidden Markov models. The method takes as its estimation criterion the highest recognition rate (or, equivalently, the lowest misrecognition rate) achieved when the model is used directly as a recognizer. The algorithm derived from this criterion performs markedly better than the maximum likelihood estimator, and one implementation of the algorithm is given. Experiments show that the model recognition rate of this method is about 5% higher than that of models estimated by the maximum likelihood method.

18.
We examine the utility of linear mixture modelling in the sub-pixel analysis of Landsat Enhanced Thematic Mapper (ETM) imagery to estimate the three key land cover components in an urban/suburban setting: impervious surface, managed/unmanaged lawn, and tree cover. The relative effectiveness of two different endmember sets was also compared: the interior endmember set consisted of the median pixel value of the training pixels of each land cover, and the exterior endmember set of the extreme pixel value. As a means of accuracy assessment, the resulting land cover estimates were compared with independent estimates obtained from the visual interpretation of digital orthophotography and classified IKONOS imagery. Impervious surface estimates from the Landsat ETM showed a high degree of similarity (RMS error (RMSE) within approximately ±10 to 15%) to those obtained using high-spatial-resolution digital orthophotography and IKONOS imagery. The partition of the vegetation component into tree versus grass cover was more problematic, with RMSE of approximately ±12 to 22%, due to the greater spectral similarity between these land cover types. The interior endmember set appeared to provide better differentiation between grass and urban tree cover than the exterior endmember set. The ability to separate the grass and tree components of urban vegetation is of major importance to the study of urban/suburban ecosystems as well as to watershed assessment.
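Linear mixture modelling solves, per pixel, for non-negative endmember fractions that sum to one. A common sketch enforces the sum-to-one constraint by appending a heavily weighted row of ones to the endmember matrix and solving with non-negative least squares; the weight value is a conventional numerical trick, not taken from the paper:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, weight=1000.0):
    """Fully constrained linear unmixing (fractions >= 0, sum to 1).
    pixel:      (n_bands,) observed spectrum
    endmembers: (n_classes, n_bands) endmember spectra"""
    # augment with a heavily weighted sum-to-one equation, then solve NNLS
    E = np.vstack([endmembers.T, weight * np.ones(endmembers.shape[0])])
    b = np.append(pixel, weight)
    fractions, _ = nnls(E, b)
    return fractions
```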

19.
Pixel-based and object-oriented processing of Chinese HJ-1-A satellite imagery (resolution 30 m) acquired on 23 July 2009 was utilized for classification of a study area in Budapest, Hungary. The pixel-based method (a maximum likelihood classifier at the pixel level, MLCPL) was compared with two object-oriented methods: a maximum likelihood classifier at the object level (MLCOL), and a hybrid method combining image segmentation with MLCPL. An extension of the watershed segmentation method was used in this article, and an optimum segmentation scale was chosen experimentally. Classification results showed that the hybrid method outperformed MLCOL, with an overall accuracy of 90.53% compared with 77.53% for MLCOL. Jeffries–Matusita distance analysis revealed that the hybrid method could maintain spectral separability between different classes, which explains its high classification accuracy in mixed-cover types compared with MLCOL. The classification result of the hybrid model is also preferred over MLCPL in geographical or landscape ecological research, for its accordance with patches in landscape ecology and for the continuity of its results. The hybrid of image segmentation and pixel-based classification provides a new way to classify land-cover types, especially mixed land-cover types, using medium-resolution images on a regional, national, or global basis.
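The maximum likelihood classifier underlying all three methods assigns each pixel (or object) to the class whose multivariate Gaussian, fit on training samples, gives the highest log-likelihood. A minimal sketch:

```python
import numpy as np

def train_mlc(X, y, n_classes):
    """Per-class Gaussian statistics (mean, covariance) from training pixels."""
    stats = []
    for c in range(n_classes):
        Xc = X[y == c]
        stats.append((Xc.mean(axis=0), np.cov(Xc, rowvar=False)))
    return stats

def classify_mlc(X, stats):
    """Assign each pixel to the class maximizing the Gaussian log-likelihood."""
    scores = []
    for mean, cov in stats:
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = X - mean
        # log-likelihood up to a common additive constant
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv, d)))
    return np.argmax(np.stack(scores, axis=1), axis=1)
```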

20.
Fractional snow cover (FSC) quantifies, per pixel, the ratio of the snow-covered area (SCA) to the pixel's spatial extent, and provides quantitative snow-distribution information for regional climate simulation, hydrological modelling, and other applications. The MODIS FSC product is computed from an empirical model that does not account for environmental factors such as topography, vegetation, and land surface temperature, and its validated accuracy over the Qinghai-Tibet Plateau is low. To address this, and taking into account the influence of these environmental factors (topography, vegetation, land surface temperature) on FSC retrieval over the plateau, a non-parametric regression model based on Multivariate Adaptive Regression Splines (MARS) and an empirical linear regression model were built for FSC estimation. A reference FSC dataset was produced from Landsat 8 surface reflectance data using the SNOMAP algorithm; part of it was used to train the models and the remainder to validate them. The results show that the MARS method estimates FSC with markedly higher accuracy than the linear regression models and the original MODIS FSC method: overall R, RMSE, and MAE for MARS were 0.791, 0.103, and 0.058; for the best linear regression model, 0.647, 0.128, and 0.072; and for the original MODIS FSC mapping method, 0.595, 0.221, and 0.170. The MARS method, which incorporates environmental information, is thus better suited to FSC retrieval over the Qinghai-Tibet Plateau, and this study offers a new approach to producing higher-accuracy FSC data for the region.
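MARS builds its regression from paired hinge functions max(0, x − t) and max(0, t − x), which let a linear solver fit piecewise-linear (kinked) responses. A toy sketch with fixed knots is shown below; real MARS selects knots and interaction terms adaptively via forward/backward passes:

```python
import numpy as np

def hinge_features(x, knots):
    """MARS-style basis: intercept plus paired hinges max(0, x-t), max(0, t-x)."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

def fit_mars_like(x, y, knots):
    """Least-squares fit of the hinge basis (knots fixed, unlike real MARS)."""
    B = hinge_features(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coef

def predict_mars_like(x, knots, coef):
    return hinge_features(x, knots) @ coef
```

On a response with a kink, such as y = |x − 0.5|, the hinge basis fits exactly while a single straight line cannot, which is the flexibility that lets MARS outperform plain linear regression for FSC.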
