Similar Documents (20 results found)
1.
An asynchronous stochastic approximation based (frequentist) approach is proposed for mapping using noisy mobile sensors under two different scenarios: (1) perfectly known sensor locations and (2) uncertain sensor locations. The frequentist methodology has linear complexity in the map components, is immune to the data association problem and is provably consistent. The frequentist methodology, in conjunction with a Bayesian estimator, is applied to the Simultaneous Localization and Mapping (SLAM) problem of Robotics. Several large maps are estimated using the hybrid Bayesian/Frequentist scheme and results show that the technique is robust to the computational and performance issues inherent in the purely Bayesian approaches to the problem.

2.
It is often the case that an outcome of interest is observed only for a restricted, non-randomly selected sample of the population. In such situations, standard statistical analysis yields biased results. This issue can be addressed using sample selection models, which are based on the estimation of two regressions: a binary selection equation, determining whether a particular statistical unit is observed, and an outcome equation for the response of interest. Classic sample selection models assume a priori that continuous regressors have a pre-specified linear or non-linear relationship to the outcome, which can lead to erroneous conclusions. For continuous responses, methods in which covariate effects are modeled flexibly have been proposed previously, the most recent being based on a Bayesian Markov chain Monte Carlo approach. A frequentist counterpart, which has the advantage of being computationally fast, is introduced here. The proposed algorithm is based on the penalized likelihood estimation framework. The construction of confidence intervals is also discussed. The empirical properties of the existing and proposed methods are studied through a simulation study. The approaches are finally illustrated by analyzing data on annual health expenditures from the RAND Health Insurance Experiment.
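To make the motivating problem concrete, here is a minimal simulation of why restricting analysis to a non-randomly selected sample biases a naive estimate. It is purely illustrative (a shared latent error drives both selection and outcome); it is not the paper's penalized likelihood estimator, and all names and parameter values are invented.

```python
import random

random.seed(0)

# Toy selection mechanism: a unit's outcome y is observed only when its
# latent selection propensity (correlated with y through u) is positive.
N = 100_000
full, observed = [], []
for _ in range(N):
    u = random.gauss(0, 1)            # shared error linking selection and outcome
    y = 1.0 + u + random.gauss(0, 1)  # outcome with true population mean 1.0
    selected = u + random.gauss(0, 1) > 0
    full.append(y)
    if selected:
        observed.append(y)

pop_mean = sum(full) / len(full)
sel_mean = sum(observed) / len(observed)
print(pop_mean, sel_mean)  # the selected-sample mean is biased upward
```

The naive mean over the observed sample overshoots the population mean by roughly half a standard deviation here, which is exactly the distortion a selection equation is introduced to correct.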

3.
A hybrid Bayesian/frequentist approach is presented for the Simultaneous Localization and Mapping (SLAM) problem. A frequentist approach is proposed for mapping a dense environment when the robot pose is known, and is then extended to the case when the pose is uncertain. The SLAM problem is then solved in two steps: (1) the robot is localized with respect to a sparse set of landmarks in the map using a Bayes filter, yielding a belief on the robot pose, and (2) this belief on the robot pose is used to map the rest of the environment using the frequentist estimator. The frequentist part of the hybrid methodology is shown to have complexity linear in the map components (constant time complexity under the assumption of bounded noise), is robust to the data association problem and is provably consistent. The complexity of the Bayesian part is kept under control owing to the sparseness of the features, which also improves the robustness of the technique to data association errors. The hybrid method is tested on standard datasets from the RADISH repository.
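A frequentist map estimate of the kind sketched above can be as simple as a per-cell relative frequency, which is O(1) per observation (hence linear in map size overall) and consistent by the law of large numbers. The following is an illustrative counting sketch under an idealized known-pose setting, not the paper's estimator; cell names and probabilities are made up.

```python
import random

random.seed(1)

# Each grid cell's occupancy probability is estimated by the relative
# frequency of "hit" observations for that cell. The per-observation
# update cost is constant, and the estimate converges to the true
# detection frequency as observations accumulate.
hits, counts = {}, {}

def observe(cell, hit):
    counts[cell] = counts.get(cell, 0) + 1
    hits[cell] = hits.get(cell, 0) + (1 if hit else 0)

def estimate(cell):
    return hits[cell] / counts[cell]

true_occ = {"a": 0.9, "b": 0.1}       # hypothetical occupied / free cells
for cell, p in true_occ.items():
    for _ in range(5000):
        observe(cell, random.random() < p)

print(estimate("a"), estimate("b"))
```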

4.
Recent advances in the technology of multiunit recordings make it possible to test Hebb's hypothesis that neurons do not function in isolation but are organized in assemblies. This has created the need for statistical approaches to detecting the presence of spatiotemporal patterns of more than two neurons in neuron spike train data. We mention three possible measures for the presence of higher-order patterns of neural activation (coefficients of log-linear models, connected cumulants, and redundancies) and present arguments in favor of the coefficients of log-linear models. We present test statistics for detecting the presence of higher-order interactions in spike train data by parameterizing these interactions in terms of coefficients of log-linear models. We also present a Bayesian approach for inferring the existence or absence of interactions and estimating their strength. The two methods, the frequentist and the Bayesian one, are shown to be consistent in the sense that interactions that are detected by either method also tend to be detected by the other. A heuristic for the analysis of temporal patterns is also proposed. Finally, a Bayesian test is presented that establishes stochastic differences between recorded segments of data. The methods are applied to experimental data and synthetic data drawn from our statistical models. Our experimental data are drawn from multiunit recordings in the prefrontal cortex of behaving monkeys, the somatosensory cortex of anesthetized rats, and the visual cortex of behaving monkeys.
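For three binary neurons, the log-linear parameterization referred to above has a particularly transparent triplet term: for a 2×2×2 table of firing-pattern probabilities, the third-order coefficient is an alternating sum of log-probabilities that vanishes exactly when no triplet interaction is present. A minimal sketch of that coefficient (the general test statistics of the paper are not reproduced here):

```python
import math

# Third-order log-linear interaction for a 2x2x2 probability table
# p[i][j][k]: the alternating sum of log-probabilities, zero iff the
# three-way interaction term of the log-linear model is absent.
def third_order_coeff(p):
    total = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                total += (-1) ** (i + j + k) * math.log(p[i][j][k])
    return total / 8.0

# Three independent neurons with firing probabilities 0.2, 0.3, 0.4:
q = (0.2, 0.3, 0.4)
p_ind = [[[(q[0] if i else 1 - q[0]) * (q[1] if j else 1 - q[1]) * (q[2] if k else 1 - q[2])
           for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
print(third_order_coeff(p_ind))  # ~0: independence implies no triplet interaction
```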

5.
Because of the high cost and time constraints for clinical trials, researchers often need to determine the smallest sample size that provides accurate inferences for a parameter of interest. Although most experimenters have employed frequentist sample-size determination methods, the Bayesian paradigm offers a wide variety of sample-size determination methodologies. Bayesian sample-size determination methods are becoming increasingly popular in clinical trials because of their flexibility and easily interpretable inferences. Recently, Bayesian approaches have been used to determine the sample size of a single Poisson rate parameter in a clinical trial setting. In this paper, we extend these results to the comparison of two Poisson rates and develop methods for sample-size determination for hypothesis testing in a Bayesian context. We have created functions in R to determine the parameters for the conjugate gamma prior and calculate the sample size for the average length criterion and average power methods. We also provide two examples that implement our sample-size determination methods using clinical data.
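The conjugate machinery behind such methods is compact: a Gamma(a, b) prior on a Poisson rate with total count S over n exposure units gives a Gamma(a + S, b + n) posterior. The sketch below (in Python rather than the paper's R, with made-up priors and counts) approximates a credible interval for the ratio of two rates by Monte Carlo; an average-length criterion would repeat this over simulated datasets at each candidate sample size.

```python
import random

random.seed(2)

# Conjugate gamma-Poisson update and a Monte Carlo credible interval for
# the ratio of two Poisson rates (the ratio has no simple closed form).
def posterior_draws(a, b, S, n, m=20000):
    # Gamma(a + S, rate b + n); gammavariate takes shape and SCALE.
    return [random.gammavariate(a + S, 1.0 / (b + n)) for _ in range(m)]

lam1 = posterior_draws(a=2.0, b=1.0, S=30, n=10)   # hypothetical arm 1
lam2 = posterior_draws(a=2.0, b=1.0, S=50, n=10)   # hypothetical arm 2
ratio = sorted(l1 / l2 for l1, l2 in zip(lam1, lam2))
lo, hi = ratio[int(0.025 * len(ratio))], ratio[int(0.975 * len(ratio))]
print(lo, hi)  # 95% equal-tailed credible interval for lambda1 / lambda2
```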

6.
Bayesian hierarchical modelling techniques have some advantages over classic methods for the analysis of cluster-randomized trials. The Bayesian approach is also becoming more popular for dealing with measurement error and misclassification problems. We propose a Bayesian approach to analyzing a cluster-randomized trial that adjusts for misclassification in a binary covariate in a random-effect logistic model when a gold standard is not available. This Markov chain Monte Carlo (MCMC) approach uses two imperfect measures of a dichotomous exposure under the assumptions of conditional independence and non-differential misclassification. Both a simulated numerical example and a real clinical example are given to illustrate the proposed approach. The Bayesian approach has great potential for misclassification problems in generalized linear mixed models (GLMMs), since it allows us to fit complex models and identify all the parameters. Our results suggest that the Bayesian approach to analyzing cluster-randomized trials while adjusting for misclassification in a GLMM is flexible and powerful.
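Under the stated assumptions, the observed-data probability of two imperfect binary measures is a two-component mixture over the unobserved true exposure. A minimal sketch of that likelihood contribution for one subject (sensitivities, specificities and prevalence are invented; the full random-effect logistic model is not reproduced):

```python
# P(W1 = w1, W2 = w2) for two imperfect measures of a true binary exposure
# X, under conditional independence given X and non-differential
# misclassification. se/sp are sensitivity and specificity; pi is P(X = 1).
def prob_w(w1, w2, pi, se1, sp1, se2, sp2):
    def p(w, se, sp, x):
        # P(W = w | X = x)
        if x == 1:
            return se if w == 1 else 1 - se
        return 1 - sp if w == 1 else sp
    return sum(p(w1, se1, sp1, x) * p(w2, se2, sp2, x) * (pi if x else 1 - pi)
               for x in (0, 1))

params = dict(pi=0.3, se1=0.9, sp1=0.8, se2=0.85, sp2=0.95)
total = sum(prob_w(w1, w2, **params) for w1 in (0, 1) for w2 in (0, 1))
print(total)  # the four (w1, w2) patterns have probabilities summing to 1
```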

7.
Exact two-sided guaranteed-coverage tolerance intervals for the exponential distribution which satisfy the traditional “equal-tailedness” condition are derived in the failure-censoring case. The available empirical information is provided by the first r ordered observations in a sample of size n. A Bayesian approach for the construction of equal-tailed tolerance intervals is also proposed. The degree of accuracy of a given tolerance interval is quantified. Moreover, the number of failures needed to achieve the desired accuracy level is predetermined. The Bayesian perspective is shown to be superior to the frequentist viewpoint in terms of accuracy. Extensions to other statistical models are presented, including the Weibull distribution with unknown scale parameter. An alternative tolerance interval which coincides with an outer confidence interval for an equal-tailed quantile interval is also examined. Several important computational issues are discussed. Three censored data sets are considered to illustrate the results developed.
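For orientation, the standard one-sided construction in this setting is short: with the first r of n ordered failures, the total time on test T satisfies 2T/θ ~ χ² with 2r degrees of freedom, so an upper limit covering proportion p of the exponential population with confidence 1 − α is (2T / χ²(α, 2r)) · ln(1/(1 − p)). The sketch below uses the Wilson-Hilferty approximation for the chi-square quantile to stay dependency-free; it is this classical one-sided limit, not the paper's exact two-sided equal-tailed procedure, and the data are hypothetical.

```python
import math
from statistics import NormalDist

# Wilson-Hilferty approximation to the chi-square quantile function.
def chi2_quantile(q, df):
    z = NormalDist().inv_cdf(q)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

# One-sided upper tolerance limit for an exponential lifetime under
# failure (type-II) censoring; `failures` holds the first r ordered
# failure times out of a sample of size n.
def upper_tolerance_limit(failures, n, p=0.9, alpha=0.05):
    r = len(failures)
    T = sum(failures) + (n - r) * max(failures)   # total time on test
    theta_upper = 2 * T / chi2_quantile(alpha, 2 * r)
    return theta_upper * math.log(1 / (1 - p))

fails = [12.0, 30.0, 45.0, 61.0, 80.0]            # hypothetical first r = 5 failures
print(upper_tolerance_limit(fails, n=12))
```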

8.
Equating is an important step in the process of collecting, analyzing, and reporting test scores in any program of assessment. Methods of equating utilize functions to transform scores on two or more versions of a test, so that they can be compared and used interchangeably. In common practice, traditional methods of equating use either parametric or semi-parametric models where, apart from the test scores themselves, no additional information is used to estimate the equating transformation function. A flexible Bayesian nonparametric model for test equating which allows the use of covariates in the estimation of the score distribution functions that lead to the equating transformation is proposed. A major feature of this approach is that the complete shape of the scores distribution may change as a function of the covariates. As a consequence, the form of the equating transformation can change according to covariate values. Applications of the proposed model to real and simulated data are discussed and compared to other current methods of equating.

9.
Studies of ocular disease and analyses of time to disease onset are complicated by the correlation expected between the two eyes from a single patient. We overcome these statistical modeling challenges through a nonparametric Bayesian frailty model. While this model suggests itself as a natural one for such complex data structures, model fitting routines become overwhelmingly complicated and computationally intensive given the nonparametric form assumed for the frailty distribution and baseline hazard function. We consider empirical Bayesian methods to alleviate these difficulties through a routine that iterates between frequentist, data-driven estimation of the cumulative baseline hazard and Markov chain Monte Carlo estimation of the frailty and regression coefficients. We show both in theory and through simulation that this approach yields consistent estimators of the parameters of interest. We then apply the method to the short-wave automated perimetry (SWAP) data set to study risk factors of glaucomatous visual field deficits.

10.
Understanding household behaviour in residential relocation timing is of great importance in transport engineering and economics. This research develops a residential relocation model that considers the potential dynamic impacts of other households' decisions together with economic and demographic attributes, housing features, intra-household decision-making structures, travel mode choice, and other life-course attributes. A multivariate parametric survival model with both fixed and time-varying covariates is developed. To the best of the authors' knowledge, this is the first paper in the residential relocation timing literature to propose the use of a Bayesian model, in contrast to the widely used classic frequentist approach, and to discuss its advantages. An emerging residential relocation dataset collected for two cities, Sydney (Australia) and Chicago (USA), is used, covering residence, vehicle ownership, occupation, education, and economic and demographic attributes of respondents. A comprehensive comparison between the results for the two cities, and between the Bayesian and frequentist approaches, is made. This study confirms the impact of life-course variables, intra-household decision-making behaviours, and sociodemographic attributes on home mobility. According to the model outputs, the accelerating or decelerating impact of explanatory variables on relocation timing is almost the same in the two cities. The Bayesian model is confirmed to have some advantages over the frequentist model, including straightforward interpretation, the ability to make direct inferences from the results, and ease of handling complex models and optimisation convergence difficulties.

11.
Bayes factor (BF) is often used to measure evidence against the null hypothesis in Bayesian hypothesis testing. In the analysis of genome-wide association (GWA) studies, extreme BF values support the associations detected based on significant p-values. Results from recent GWA studies are presented which show that existing BFs may not be consistent with p-values when a robust test is used, because the BF and p-value approaches rely on different genetic models, and this may result in misleading conclusions. Two hybrid BFs, which combine the advantages of both the frequentist and Bayesian methods, are then proposed for markers showing at least moderate associations (p-value < 10⁻⁵) based on a robust test. One is Bayesian model averaging using a posterior weighted likelihood and the other is the maximum BF using a profile likelihood. The proposed hybrid BFs and the p-values of robust tests do not depend on a single genetic model, but instead consolidate information over a set of models. We compare the hybrid BFs with two existing BF approaches, including an existing Bayesian model averaging method, in terms of false and true positive rates by simulation. The results show that, for markers showing at least moderate associations, both hybrid BFs have higher true positive rates than the two existing BFs, while all false positive rates are similar. Applications of the two hybrid BFs to markers associated with bipolar disorder, type 2 diabetes and age-related macular degeneration are presented. Our hybrid BFs provide better and more robust measures for comparing significantly associated markers within and across GWA studies.
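The two ways of consolidating per-model evidence can be shown schematically: average the per-genetic-model Bayes factors under model weights, or take the maximum. The numbers below (BFs for, say, recessive/additive/dominant models, and the weights) are invented, and this is only the combination step, not the paper's posterior-weighted or profile-likelihood constructions.

```python
# Weighted-average and maximum combinations of per-model Bayes factors.
def averaged_bf(bfs, weights):
    z = sum(weights)
    return sum(b * w / z for b, w in zip(bfs, weights))

def max_bf(bfs):
    return max(bfs)

bfs = [4.0, 120.0, 15.0]       # hypothetical recessive, additive, dominant BFs
weights = [0.25, 0.5, 0.25]    # hypothetical model weights
print(averaged_bf(bfs, weights), max_bf(bfs))
```

The averaged version always lies between the smallest and largest per-model BF, so it cannot be dominated by a single poorly fitting genetic model, which is the robustness property emphasized above.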

12.
Smoothing spline ANOVA (SSANOVA) provides an approach to semiparametric function estimation based on an ANOVA type of decomposition. Wahba et al. (1995) decomposed the regression function based on a tensor sum decomposition of inner product spaces into orthogonal subspaces, so the effects of the estimated functions from each subspace can be viewed independently. Recent research related to smoothing spline ANOVA focuses on either frequentist approaches or a Bayesian framework for variable selection and prediction. In our approach, we seek “objective” priors especially suited to estimation. The prior for linear terms including level effects is a variant of the Zellner–Siow prior (Zellner and Siow, 1980), and the prior for a smooth effect is specified in terms of effective degrees of freedom. We study this fully Bayesian SSANOVA model for Gaussian response variables, and the method is illustrated with a real data set.

13.
The development of group sequential methods has produced multiple criteria that are used to guide the decision of whether a clinical trial should be stopped early given the data observed at the time of an interim analysis. However, the potential for time-varying treatment effects should be considered when monitoring survival endpoints. In order to quantify uncertainty in future treatment effects it is necessary to consider future alternatives which might reasonably be observed conditional upon data collected up to the time of an interim analysis. A method of imputation of future alternatives using a random walk approach that incorporates a Bayesian conditional hazards model and splits the prior distribution for model parameters across regions of sampled and unsampled support is proposed. By providing this flexibility, noninformative priors can be used over regions of sampled data while providing structure to model parameters over future time intervals. The result is that inference over areas of sampled support remains consistent with commonly used frequentist statistics while a rich class of predictive distributions of treatment effect over the maximal duration of a trial are generated to assess potential treatment effects which may be plausibly observed if the trial were to continue. Selected operating characteristics of the proposed method are investigated via simulation and the approach is applied to survival data stemming from trial 002 of the Community Programs for Clinical Research on AIDS (CPCRA) study.

14.

Neural state classification (NSC) is a recently proposed method for runtime predictive monitoring of hybrid automata (HA) using deep neural networks (DNNs). NSC trains a DNN as an approximate reachability predictor that labels an HA state x as positive if an unsafe state is reachable from x within a given time bound, and labels x as negative otherwise. NSC predictors have very high accuracy, yet are prone to prediction errors that can negatively impact reliability. To overcome this limitation, we present neural predictive monitoring (NPM), a technique that complements NSC predictions with estimates of the predictive uncertainty. These measures yield principled criteria for the rejection of predictions likely to be incorrect, without knowing the true reachability values. We also present an active learning method that significantly reduces the NSC predictor’s error rate and the percentage of rejected predictions. We develop two versions of NPM based, respectively, on the use of frequentist and Bayesian techniques to learn the predictor and the rejection rule. Both versions are highly efficient, with computation times on the order of milliseconds, and effective, managing in our experimental evaluation to successfully reject almost all incorrect predictions. In our experiments on a benchmark suite of six hybrid systems, we found that the frequentist approach consistently outperforms the Bayesian one. We also observed that the Bayesian approach is less practical, requiring a careful and problem-specific choice of hyperparameters.
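The rejection idea can be made concrete with a small sketch: an ensemble of predictors gives per-state probabilities that an unsafe state is reachable, and a prediction whose ensemble disagreement exceeds a threshold is rejected rather than trusted. The ensemble outputs and the threshold below are invented for illustration; the paper's frequentist and Bayesian uncertainty estimators are not reproduced.

```python
import statistics

# Accept a reachability prediction only when the ensemble's variance
# (a proxy for predictive uncertainty) is below a threshold tau.
def monitor(ensemble_probs, tau=0.05):
    mean = statistics.fmean(ensemble_probs)
    var = statistics.pvariance(ensemble_probs)
    label = "positive" if mean >= 0.5 else "negative"
    return ("reject", None) if var > tau else ("accept", label)

confident = [0.94, 0.96, 0.95, 0.93]   # ensemble agrees: accept as positive
uncertain = [0.1, 0.9, 0.4, 0.8]       # ensemble disagrees: reject
print(monitor(confident), monitor(uncertain))
```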


15.
The Bayesian approach is widely used in automatic target recognition (ATR) systems based on multisensor fusion technology. Problems in data fusion systems are complex by nature and can often be characterized not only by randomness but also by fuzziness. However, in general, current Bayesian methods can only account for randomness. To accommodate complex natural problems with both types of uncertainty, it is profitable to improve the existing approach by incorporating fuzzy theory into classical techniques. In this paper, after representing both the individual attributes of the target in the model database and the sensor observations or reports as fuzzy membership functions, a likelihood function is constructed to deal with the fuzzy data collected by each sensor. A similarity measure is introduced to determine the agreement degree of each sensor. Based on the similarity measure, a consensus fusion approach (CFA) is developed to generate a global likelihood from the individual attribute likelihoods for all the sensor reports. A numerical example illustrates the target recognition application of the fuzzy-Bayesian approach. The text was submitted by the authors in English.
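A consensus-style fusion step can be sketched as follows: each sensor reports a likelihood vector over target classes, an agreement degree for each sensor is derived from how similar its report is to the others', and the global likelihood is a weighted geometric combination. The similarity measure, weighting scheme and numbers here are all illustrative stand-ins, not the paper's CFA.

```python
import math

# Overlap similarity between two discrete likelihood vectors.
def similarity(p, q):
    return sum(min(a, b) for a, b in zip(p, q))

def fuse(reports):
    n = len(reports)
    # Agreement degree: mean similarity of each sensor to the others.
    agree = [sum(similarity(reports[i], reports[j])
                 for j in range(n) if j != i) / (n - 1) for i in range(n)]
    z = sum(agree)
    w = [a / z for a in agree]
    # Weighted geometric combination, renormalized over classes.
    k = len(reports[0])
    fused = [math.prod(reports[i][c] ** w[i] for i in range(n)) for c in range(k)]
    s = sum(fused)
    return [f / s for f in fused]

reports = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.2, 0.2, 0.6]]  # third sensor dissents
print(fuse(reports))  # fused likelihood still favors class 0
```

Because the dissenting sensor agrees less with the others, it receives a smaller weight, so the fused result follows the consensus.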

16.
A new method for estimating the variance of noise in nonlinear regression is presented. The noise is modelled as regional, i.e. its variance depends on the input, and it consists of two sources: measurement errors and inherent noise of the underlying function. Our approach consists of two neural networks using Bayesian methods, which are trained in sequence. It rests on the assumption that the network's predictions of the mean are unbiased; the confidence of the network's prognoses is then used to predict the variance of the noise. We demonstrate our approach on two toy data sets and one real data set.

17.
The potentially important role of the prior distribution of the roughness penalty parameter in the resulting smoothness of Bayesian P-splines models is considered. The recommended specification for that distribution yields models that can lack flexibility in specific circumstances. In such instances, these are shown to correspond to a frequentist P-splines model with a predefined and severe roughness penalty parameter, an obviously undesirable feature. It is shown that the specification of a hyperprior distribution for one parameter of that prior distribution provides the desired flexibility. Alternatively, a mixture prior can also be used. An extension of these two models enabling adaptive penalties is provided. The posterior of all the proposed models can be quickly explored using the convenient Gibbs sampler.
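The inflexibility described above, equivalent to a frequentist fit with a fixed, severe roughness penalty, can be visualized with a minimal stand-in for a P-spline: a Whittaker-style discrete smoother minimizing ||y − f||² + λ||D₂f||², where D₂ is the second-order difference matrix. This sketch swaps the B-spline basis for the identity basis to stay short and is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Whittaker-style smoother: solve (I + lam * D2'D2) f = y, where D2 is
# the second-order difference matrix. A large fixed lam (the analogue of
# the rigid prior discussed above) flattens the fit toward a line.
def smooth(y, lam):
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

x = np.linspace(0, 2 * np.pi, 60)
y = np.sin(x) + rng.normal(0, 0.2, x.size)
mild, severe = smooth(y, lam=5.0), smooth(y, lam=1e6)
print(np.abs(np.diff(severe, 2)).max())  # a huge lam removes nearly all curvature
```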

18.
The analysis of thematic accuracy of maps and classifications relies on the interpretation of the error matrix. The statistical basis of the analysis has to date been from a frequentist point of view. An alternative Bayesian approach, described here, has a number of attractive features. Perhaps most importantly, it allows us to combine prior information in our analyses. The interpretation of the Bayesian estimates and intervals is also more intuitive, and because our inferences revolve around a posterior distribution, we are able to summarise estimates even after complicated transformations. Two novel plots summarising the accuracy of a map are also introduced. The approach is illustrated by analysing accuracy data from a land use map for the Baffle catchment in Queensland, Australia. Analysis is carried out using a non-informative prior and an informative prior based on data previously collected for a neighbouring catchment. The use of the informative prior tended to increase the precision of key accuracy parameters.
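The precision gain from an informative prior has a closed form in the simplest case: treating overall map accuracy as a binomial parameter, a Beta(a, b) prior with c correct and w wrong reference sites gives a Beta(a + c, b + w) posterior, and a prior built from neighbouring data adds pseudo-counts that shrink the posterior variance. The counts and priors below are made up; the paper's full error-matrix analysis is Dirichlet-multinomial rather than this single-parameter reduction.

```python
# Conjugate Beta posterior for overall map accuracy: mean and variance.
def beta_posterior(a, b, correct, wrong):
    a2, b2 = a + correct, b + wrong
    mean = a2 / (a2 + b2)
    var = a2 * b2 / ((a2 + b2) ** 2 * (a2 + b2 + 1))
    return mean, var

flat = beta_posterior(1, 1, correct=160, wrong=40)           # uniform prior
informative = beta_posterior(80, 20, correct=160, wrong=40)  # neighbouring map ~80% accurate
print(flat, informative)  # same data, tighter posterior under the informative prior
```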

19.
Quantile regression problems in practice may require flexible semiparametric forms of the predictor for modeling the dependence of responses on covariates. Furthermore, it is often necessary to add random effects accounting for overdispersion caused by unobserved heterogeneity or for correlation in longitudinal data. We present a unified approach for Bayesian quantile inference on continuous response via Markov chain Monte Carlo (MCMC) simulation and approximate inference using integrated nested Laplace approximations (INLA) in additive mixed models. Different types of covariate are all treated within the same general framework by assigning appropriate Gaussian Markov random field (GMRF) priors with different forms and degrees of smoothness. We applied the approach to extensive simulation studies and a Munich rental dataset, showing that the methods are also computationally efficient in problems with many covariates and large datasets.
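Underlying any quantile regression, Bayesian or frequentist, is the check (pinball) loss: minimizing its expectation over a constant yields the τ-quantile, which is also why the asymmetric Laplace distribution serves as the standard working likelihood in Bayesian quantile inference. A minimal sketch with illustrative data:

```python
# Check (pinball) loss for the tau-quantile.
def check_loss(u, tau):
    return u * (tau - (1.0 if u < 0 else 0.0))

# Brute-force minimizer of the empirical check loss over the observed
# values: the minimizer is an empirical tau-quantile of the data.
def best_constant(ys, tau):
    return min(ys, key=lambda c: sum(check_loss(y - c, tau) for y in ys))

ys = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
print(best_constant(ys, 0.5), best_constant(ys, 0.9))
```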

20.
A model-based clustering method is proposed for clustering individuals on the basis of measurements taken over time. Data variability is taken into account through non-linear hierarchical models leading to a mixture of hierarchical models. We study both frequentist and Bayesian estimation procedures. From a classical viewpoint, we discuss maximum likelihood estimation of this family of models through the EM algorithm. From a Bayesian standpoint, we develop appropriate Markov chain Monte Carlo (MCMC) sampling schemes for the exploration of target posterior distribution of parameters. The methods are illustrated with the identification of hormone trajectories that are likely to lead to adverse pregnancy outcomes in a group of pregnant women.
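The EM route mentioned above can be shown in its simplest form: a two-component univariate Gaussian mixture, alternating responsibilities (E-step) with weighted parameter updates (M-step). This collapses the paper's mixture of non-linear hierarchical models to the most elementary case purely to exhibit the two steps; data and initialization are invented.

```python
import math
import random

random.seed(4)

# Compact EM for a two-component univariate Gaussian mixture.
def em(xs, iters=50):
    mu = [min(xs), max(xs)]          # crude but effective initialization
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities of each component for each point.
        r = []
        for x in xs:
            d = [w[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) / sd[k]
                 for k in (0, 1)]
            s = d[0] + d[1]
            r.append((d[0] / s, d[1] / s))
        # M-step: responsibility-weighted updates of weights, means, sds.
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            w[k] = nk / len(xs)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, xs)) / nk
            sd[k] = math.sqrt(sum(ri[k] * (x - mu[k]) ** 2
                                  for ri, x in zip(r, xs)) / nk) or 1e-6
    return mu, sd, w

xs = [random.gauss(0, 1) for _ in range(150)] + [random.gauss(10, 1) for _ in range(150)]
mu, sd, w = em(xs)
print(sorted(mu))  # close to the true component means 0 and 10
```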
