Similar Documents
20 similar documents found (search time: 0 ms)
1.
The hazard function, also called the risk function or intensity function, is usually used to model survival data or other waiting times, such as unemployment durations. In contrast to the proportional hazards model, the additive risk model assumes that the hazard function is the sum, rather than the product, of the baseline hazard function and a non-negative function of covariates. We propose to introduce the covariates into the model through a Gamma hazard function, while the baseline hazard function is left unspecified. Following the Bayesian paradigm, we obtain an approximation to the posterior distribution using Markov chain Monte Carlo techniques. Subject-specific survival estimation is also studied. A real example using unemployment data is provided. This work was partially supported by the Spanish Education and Science Council Project PB96-0776.
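The additive structure can be sketched numerically. The baseline and covariate functions below are illustrative stand-ins (the paper leaves the baseline unspecified and routes covariates through a Gamma hazard), chosen only to show the sum-not-product form:

```python
import math

def additive_hazard(t, x, baseline, covariate_effect):
    """Additive risk model: hazard = baseline(t) + non-negative covariate term.
    Contrast with proportional hazards, which multiplies the two."""
    return baseline(t) + covariate_effect(x)

# Hypothetical choices, not the paper's specification:
baseline = lambda t: 0.1 * math.sqrt(t)          # any h0(t) >= 0
covariate_effect = lambda x: max(0.0, 0.5 * x)   # non-negative in the covariate

h = additive_hazard(4.0, 2.0, baseline, covariate_effect)
```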

2.
Reliable predictive accident models (PAMs), also referred to as safety performance functions (SPFs), have a variety of important uses in traffic safety research and practice. They are used to help identify sites in need of remedial treatment, to assess the safety implications of transport scheme designs, and to estimate the effectiveness of remedial treatments. The PAMs currently in use in the UK are now quite old; the data used in their development were gathered up to 30 years ago. Many changes have occurred over that period in road and vehicle design and in road safety campaigns and legislation, and the national accident rate has fallen substantially. It seems unlikely that these ageing models can be relied upon to provide accurate and reliable predictions of accident frequencies on today's roads. This paper addresses a number of methodological issues that arise in seeking practical and efficient ways to update PAMs, whether by re-calibration or by re-fitting. Models for accidents on rural single carriageway roads are used to illustrate these issues, which include the choice of distributional assumption for overdispersion, the choice of goodness-of-fit measures, independence between observations in different years and between links on the same scheme, the estimation of trends in the models, the uncertainty of predictions, and the most efficient and convenient ways to fit the required models.
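A minimal sketch of the kind of model being updated, assuming the common power-law SPF form mu = a·L·Q^b with negative-binomial overdispersion; the functional form and all parameter values here are illustrative assumptions, not the UK models' actual specification:

```python
def spf_mean(length_km, aadt, a, b):
    """Illustrative safety performance function: expected annual accident
    frequency mu = a * L * Q**b for link length L and traffic flow Q."""
    return a * length_km * aadt ** b

def nb_variance(mu, k):
    """Negative-binomial overdispersion: Var = mu + mu**2 / k,
    exceeding the Poisson variance mu whenever k is finite."""
    return mu + mu ** 2 / k

mu = spf_mean(2.0, 1000.0, 0.001, 0.8)  # hypothetical coefficients
```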

3.
Non-parametric estimation of conditional moments for sensitivity analysis
In this paper, we consider the non-parametric estimation of conditional moments, which is useful for applications in global sensitivity analysis (GSA) and in the more general emulation framework. The estimation is based on the state-dependent parameter (SDP) estimation approach and allows for the estimation of conditional moments of order greater than one. This makes it possible to identify a wider spectrum of parameter sensitivities than the variance-based main effects, such as shifts in the variance, skewness or kurtosis of the model output, thus adding valuable information for the analyst at a small computational cost.
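The SDP machinery itself is involved; as a simple stand-in, conditional moments of any order can be estimated non-parametrically with a kernel smoother (a sketch of the idea, not the paper's estimator):

```python
import math

def conditional_moment(xs, ys, x0, bandwidth, order=1):
    """Nadaraya-Watson estimate of E[Y**order | X = x0] with a Gaussian kernel."""
    w = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(wi * y ** order for wi, y in zip(w, ys)) / sum(w)

xs, ys = [0.0, 1.0, 2.0], [3.0, 3.0, 3.0]
m1 = conditional_moment(xs, ys, 1.0, 1.0)                 # conditional mean
m2 = conditional_moment(xs, ys, 1.0, 1.0, order=2)        # second moment
cond_var = m2 - m1 ** 2                                   # conditional variance
```

Higher-order conditional moments (skewness, kurtosis) follow the same pattern with `order=3, 4`.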

4.
The advent of Markov chain Monte Carlo (MCMC) methods to simulate posterior distributions has virtually revolutionized the practice of Bayesian statistics. Unfortunately, sensitivity analysis in MCMC methods is a difficult task. In this paper, a computationally low-cost method to estimate local parametric sensitivities in Bayesian models is proposed. The sensitivity measure considered here is the gradient vector of a posterior quantity with respect to the parameter. The gradient vector components are estimated by using a result based on the integral/derivative interchange. The MCMC simulations used to estimate the posterior quantity can be re-used to estimate the sensitivity measures and their errors, avoiding the need for further sampling. The proposed method is easy to apply in practice, as shown with an illustrative example.
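One standard identity behind such estimators (a sketch, not necessarily the paper's exact result): when the posterior depends smoothly on a hyperparameter theta, the integral/derivative interchange gives d/dtheta E[g] = Cov(g, s), with s the score d log p/dtheta, so the gradient is a sample covariance over draws already produced by the MCMC run:

```python
def posterior_sensitivity(g_draws, score_draws):
    """Estimate d/dtheta E[g] as the sample covariance of g and the score,
    re-using existing MCMC draws (no further sampling needed)."""
    n = len(g_draws)
    g_mean = sum(g_draws) / n
    s_mean = sum(score_draws) / n
    return sum((g - g_mean) * (s - s_mean)
               for g, s in zip(g_draws, score_draws)) / n

grad = posterior_sensitivity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```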

5.
Mixture models have received considerable attention in recent years. Practical situations in reliability and survival analysis may be addressed by using mixture models. When making inferences about them, a sensitivity analysis is necessary in addition to the parameter estimates. In this paper, a general technique is proposed to estimate local prior sensitivities in finite mixtures of distributions from natural exponential families having quadratic variance function (NEF-QVF). Those families include some distributions of wide use in reliability theory. An advantage of this method is that it allows a direct implementation of the sensitivity measure estimates and their errors. In addition, the samples drawn to estimate the parameters in the mixture model are re-used to estimate the sensitivity measures and their errors. An illustrative application based on insulating fluid failure data is shown.

6.
7.
Dynamic pricing models for electronic business
Dynamic pricing is the dynamic adjustment of prices to consumers depending upon the value these customers attribute to a product or service. Today's digital economy is ready for dynamic pricing; however, recent research has shown that prices will have to be adjusted in fairly sophisticated ways, based on sound mathematical models, to derive the benefits of dynamic pricing. This article surveys the different models that have been used in dynamic pricing. We first motivate dynamic pricing and present the underlying concepts, with several examples, and explain the conditions under which dynamic pricing is likely to succeed. We then bring out the role of models in computing dynamic prices. The models surveyed include inventory-based models, data-driven models, auctions, and machine learning. We present a detailed example of an e-business market to show the use of reinforcement learning in dynamic pricing.
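As a toy illustration of the learning-based approach, a stateless epsilon-greedy bandit can discover the revenue-maximizing price under an assumed demand curve. The linear demand function and the candidate prices are hypothetical, and this is a far simpler learner than the reinforcement-learning example in the article:

```python
import random

def learn_price(prices, demand, episodes=5000, eps=0.2, seed=0):
    """Epsilon-greedy bandit: estimate expected revenue per price by
    incremental sample averages, mostly exploiting the current best.
    demand(p) is the assumed probability a customer buys at price p."""
    rng = random.Random(seed)
    q = {p: 0.0 for p in prices}   # revenue estimates
    n = {p: 0 for p in prices}     # pull counts
    for _ in range(episodes):
        p = rng.choice(prices) if rng.random() < eps else max(q, key=q.get)
        revenue = p if rng.random() < demand(p) else 0.0
        n[p] += 1
        q[p] += (revenue - q[p]) / n[p]   # incremental sample average
    return max(q, key=q.get)

# Hypothetical linear demand: expected revenue p*(1 - p/10) peaks at p = 5.
best = learn_price([2.0, 5.0, 8.0], lambda p: max(0.0, 1 - p / 10))
```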

8.
The basic reproduction number R0 is one of the most important concepts in modern infectious disease epidemiology. However, for models that are more realistic and more complex than those assuming homogeneous mixing in the population, other threshold quantities can be defined that are sometimes more useful and more easily derived in terms of model parameters. In this paper, we present a model for the spread of a permanently immunizing infection in a population socially structured into households and workplaces/schools, and we propose and discuss a new household-to-household reproduction number RH for it. We show how RH overcomes some of the limitations of a previously proposed threshold parameter, and we highlight its relationship with the effort required to control an epidemic when interventions are targeted at randomly selected households.
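The threshold role of such reproduction numbers can be illustrated with a generic branching-process calculation (the offspring distributions below are hypothetical, not derived from the paper's household model): a major epidemic is possible if and only if the mean number of "offspring" units, here households, exceeds 1, i.e. the extinction probability is below 1.

```python
def extinction_probability(offspring_pmf, iters=500):
    """Iterate q <- G(q), where G is the offspring probability generating
    function, starting from q = 0; the limit is the extinction probability
    of the branching process. It equals 1 exactly when the mean offspring
    number (R_H when the units are households) is <= 1."""
    q = 0.0
    for _ in range(iters):
        q = sum(p * q ** k for k, p in enumerate(offspring_pmf))
    return q

q_sub = extinction_probability([0.5, 0.25, 0.25])    # mean 0.75 < 1
q_super = extinction_probability([0.25, 0.25, 0.5])  # mean 1.25 > 1
```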

9.
This paper presents an assessment of the efficiency of Kriging interpolation models as surrogate models for structural reliability problems involving time-consuming numerical models, such as nonlinear finite element structural models. The efficiency assessment is performed through a systematic comparison of the accuracy of the failure probability predictions based on the first-order reliability method, using the most common first- and second-order polynomial regression models and the Kriging interpolation models as surrogates for the true limit state function. An application problem of practical importance in the field of marine structures, requiring the evaluation of a nonlinear finite element structural model, is adopted as a numerical example. The accuracy of the failure probability predictions is characterised as a function of the number of support points, the dispersion of the support points in relation to the so-called design point, and the order of the Kriging basis functions. It is shown with the application problem considered that Kriging interpolation models are efficient surrogates for structural reliability problems and can provide significantly more accurate failure probability predictions than the most common polynomial regression models.
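The defining property that makes Kriging attractive as a surrogate is exact interpolation at the support points. A minimal one-dimensional sketch, assuming zero-mean simple Kriging with a squared-exponential correlation and no nugget (much simpler than the regression-Kriging assessed in the paper):

```python
import math

def _solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def kriging_predict(xs, ys, x0, length=1.0):
    """Zero-mean simple Kriging: weights solve K w = y, prediction is k(x0)'w."""
    corr = lambda a, b: math.exp(-((a - b) ** 2) / (2 * length ** 2))
    w = _solve([[corr(xi, xj) for xj in xs] for xi in xs], ys)
    return sum(wi * corr(x0, xi) for wi, xi in zip(w, xs))

pred = kriging_predict([0.0, 1.0, 2.0], [1.0, 4.0, 9.0], 1.0)
```

At a support point the prediction reproduces the observed value exactly, which is what distinguishes interpolation from polynomial regression fitting.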

10.
A new experimental procedure is proposed which allows the determination of the parameters of mass-spring models used to analyse the Charpy impact test. It is based on the measurement of the tup load and the specimen deflection during the impact test. The contact stiffness between the tup and the specimen is derived from the ratio of the specimen deflection to the tup displacement. The model predictions are compared with experimental results obtained from impact tests performed on PMMA specimens. This revised version was published online in August 2006 with corrections to the Cover Date.
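Under a two-spring series idealization (an assumption made here for illustration, not necessarily the paper's exact lumped model), the tup displacement is the sum of the contact indentation and the specimen deflection, which ties the measured deflection ratio directly to the contact stiffness:

```python
def contact_stiffness(specimen_stiffness, deflection_ratio):
    """Series springs under a common force F:
    delta_spec = F/k_s, delta_tup = F/k_c + F/k_s, hence
    r = delta_spec / delta_tup = k_c / (k_c + k_s)  =>  k_c = k_s * r / (1 - r)."""
    r = deflection_ratio
    return specimen_stiffness * r / (1.0 - r)

# Hypothetical values: k_s = 100 N/mm and a measured ratio of 0.8.
k_c = contact_stiffness(100.0, 0.8)
```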

11.
Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses accuracy, efficiency, and the handling of unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and an accurate method for system-level reliability analysis.

12.
Predictive models of epidemic cholera need to resolve, at suitable aggregation levels, spatial data pertaining to local communities, epidemiological records, hydrologic drivers, waterways, patterns of human mobility and proxies of exposure rates. We address this issue in a formal model comparison framework and provide a quantitative assessment of the explanatory and predictive abilities of various model settings with different spatial aggregation levels and coupling mechanisms. Reference is made to records of the recent Haiti cholera epidemic. Our intensive computations and objective model comparisons show that spatially explicit models accounting for spatial connections have better explanatory power than spatially disconnected ones for short-to-intermediate calibration windows, while parsimonious, spatially disconnected models perform better with long training sets. On average, spatially connected models show better predictive ability than disconnected ones. We discuss the limits and validity of the various approaches and the pathway towards the development of case-specific predictive tools in the context of emergency management.

13.
In the causal analysis of survival data, a time-based response is related to a set of explanatory variables. However, the selection and proper design of the latter may become a difficult task, particularly in the preliminary stage, when information is limited. We propose an alternative nonparametric approach to estimate the survival function which allows one to evaluate the relative importance of each potential explanatory variable in a simple and exploratory fashion. To achieve this aim, each of the explanatory variables is used to partition the observed survival times. The observations are assumed to be partially exchangeable according to such a partition. We then consider, conditionally on each partition, a hierarchical nonparametric Bayesian model on the hazard functions. To measure the importance of each explanatory variable, we derive the posterior probability of the corresponding partition. Such probabilities are then employed to estimate the hazard functions by averaging the estimated conditional hazards over the set of all entertained partitions.
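The final averaging step has the shape of ordinary Bayesian model averaging; a minimal sketch, with the partition probabilities and conditional hazards below purely hypothetical:

```python
def averaged_hazard(partition_probs, conditional_hazards, t):
    """Model-average the hazard at time t: weight each partition's conditional
    hazard estimate by the posterior probability of that partition."""
    return sum(p * h(t) for p, h in zip(partition_probs, conditional_hazards))

# Two hypothetical partitions with posterior probabilities 0.75 and 0.25,
# each yielding a (constant, illustrative) conditional hazard estimate.
avg = averaged_hazard([0.75, 0.25], [lambda t: 1.0, lambda t: 2.0], 0.5)
```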

14.
A nonhomogeneous Markov process is applied to analyse a cohort of women with breast cancer who underwent surgery, with follow-up scheduled every month. Three states are considered: no relapse, relapse and death. As relapse times change over time, we have extended previous approaches for a time-homogeneous model to a nonhomogeneous multistate process. The transition intensity functions among states are the hazard rate functions of different lognormal distributions; we therefore build the likelihood function for this model, estimate the parameters by maximum likelihood, and compare the empirical and nonhomogeneous models in terms of the survival probability functions. The effect of treatments is incorporated as covariates by means of the lognormal hazard rate functions, following the proportional hazards model. Thus, we have a multistate model with multidimensional covariates. Survival functions for the different cohorts submitted to treatments are obtained and goodness-of-fit tests are performed. The first author gratefully acknowledges the financial support of DGES, Proyecto PB97-0827, Ministerio de Educación y Cultura, Spain.
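The building block is the lognormal hazard rate, optionally scaled by a proportional-hazards covariate multiplier; a sketch with illustrative parameter values (the actual fitted parameters come from the cohort data):

```python
import math

def lognormal_hazard(t, mu, sigma):
    """Hazard = pdf(t) / survival(t) for a lognormal(mu, sigma) waiting time."""
    z = (math.log(t) - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / (t * sigma * math.sqrt(2 * math.pi))
    surv = 0.5 * math.erfc(z / math.sqrt(2))
    return pdf / surv

def transition_intensity(t, mu, sigma, beta, x):
    """Proportional-hazards treatment effect on the baseline intensity."""
    return lognormal_hazard(t, mu, sigma) * math.exp(beta * x)

h = lognormal_hazard(1.0, 0.0, 1.0)  # at t = e^mu, z = 0, so h = 2/sqrt(2*pi)
```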

15.
For the practical implementation of computer simulation or analysis of human motion in ergonomics, orthopaedics, sports, and other areas, the respective models must be individualized by assigning them subject-specific parameter values, such as those for body segment parameters. Several methods for determining this parameter set for a given subject, and their efficiency, are discussed. It is shown that the anthropometrico-computational method is presently the most accurate and reliable technique, with potential for further improvement.

16.
A method is presented to build reduced (equivalent) models of stiffened panels made of thin-walled composite materials. The technique is developed for use in the modal analysis of panels and wing boxes, allowing finite element modelling and analysis using a single-type, three-dimensional orthotropic p-element. The use of a single element guarantees speed and flexibility in the (re)modelling of the structure and reduces the modelling and analysis errors connected with finite element analysis in preliminary-design/multidisciplinary-optimization environments. The method is tested on two types of representative wing boxes. Different approaches to the equivalencing are tested and compared with each other. The results show that the equivalent models give results within a few percent of those obtained by running a full model, while saving as much as one order of magnitude in the number of degrees of freedom employed.

17.
In this contribution an algorithm for the parameter identification of thermoelastic damage models is proposed, in which non-uniform distributions of the state variables, such as stresses, strains, damage variables and temperature, are taken into account. To this end, a least-squares functional consisting of experimental data and simulated data is minimized, whereby the latter are obtained with the finite element method. In order to improve the efficiency of the minimization process, a gradient-based optimization algorithm is applied, and the corresponding sensitivity analysis for the coupled variational problem is described in a systematic manner. For illustrative purposes, the performance of the algorithm is demonstrated for a non-homogeneous shear problem with thermal loading. Copyright © 2000 John Wiley & Sons, Ltd.
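The structure of such an identification loop can be sketched for a scalar parameter, with a central finite-difference gradient standing in for the paper's analytical sensitivity analysis; the toy "simulation", data and parameter values are all hypothetical:

```python
def identify(simulate, data, theta0, lr=0.01, steps=200, h=1e-6):
    """Minimize the least-squares functional J(theta) = sum (sim - data)^2
    by gradient descent. The gradient is a central finite difference here;
    the paper instead derives analytical sensitivities for the FE model."""
    def J(theta):
        return sum((s - d) ** 2 for s, d in zip(simulate(theta), data))
    theta = theta0
    for _ in range(steps):
        grad = (J(theta + h) - J(theta - h)) / (2 * h)
        theta -= lr * grad
    return theta

# Toy linear response in the unknown parameter; data generated with theta = 2.
theta_hat = identify(lambda t: [t * x for x in (1.0, 2.0, 3.0)],
                     [2.0, 4.0, 6.0], theta0=0.0)
```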

18.
Deadlocks constitute a major issue in the design and operation of discrete event systems. In automated manufacturing systems, deadlocks assume even greater importance in view of the automated operation. In this paper, we show that Markov chains with absorbing states provide a natural model of manufacturing systems with deadlocks. With illustrative examples, we show that performance indices such as the mean time to deadlock and the mean number of finished parts before deadlock can be efficiently computed in the modelling framework of Markov chains with absorbing states. We also show that the distribution of the time to deadlock can be computed by conducting a transient analysis of the Markov chain model.
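For the transient-to-transient transition matrix Q, the mean time to absorption satisfies t = 1 + Q t (equivalently t = (I − Q)^{-1} 1 via the fundamental matrix). A minimal sketch using fixed-point iteration, valid when deadlock is reached with probability one; the example chain is hypothetical:

```python
def mean_time_to_deadlock(Q, iters=10000):
    """Expected number of steps until absorption (deadlock) from each
    transient state, iterating t <- 1 + Q t to its fixed point."""
    n = len(Q)
    t = [0.0] * n
    for _ in range(iters):
        t = [1.0 + sum(Q[i][j] * t[j] for j in range(n)) for i in range(n)]
    return t

# Hypothetical chain: state 0 moves to state 1 w.p. 0.5, else deadlocks;
# state 1 always deadlocks on the next step.
times = mean_time_to_deadlock([[0.0, 0.5], [0.0, 0.0]])
```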

19.
Getachew A. Dagne, TEST, 2001, 10(2): 375-391
Sample surveys are usually designed and analyzed to produce estimates for larger areas. However, sample sizes are often not large enough to give adequate precision for the small area estimates of interest. To overcome such difficulties, borrowing strength from related small areas via modeling becomes an appropriate approach. In line with this, we propose hierarchical models with power transformations for improving the precision of small area predictions. The proposed methods are applied to satellite data, in conjunction with survey data, to estimate the mean acreage under a specified crop for counties in Iowa.
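The essence of borrowing strength can be shown with a textbook composite estimator (a sketch of the idea, not the paper's power-transformed hierarchical model): each area's direct survey mean is shrunk toward a model-based synthetic mean, with a weight set by the variance components.

```python
def small_area_estimate(direct_mean, n, synthetic_mean, tau2, sigma2):
    """Composite small-area estimate with weight w = tau2 / (tau2 + sigma2/n).
    As the area sample size n grows, w -> 1 and the direct estimate dominates;
    tiny areas shrink toward the synthetic (model-based) mean."""
    w = tau2 / (tau2 + sigma2 / n)
    return w * direct_mean + (1.0 - w) * synthetic_mean

# Hypothetical variance components and means, for a single-observation area.
est = small_area_estimate(10.0, 1, 6.0, 1.0, 1.0)
```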

20.
We present a new statistical approach to analysing epidemic time-series data. A major difficulty for inference is that (i) the latent transmission process is only partially observed and (ii) observed quantities are further aggregated temporally. We develop a data augmentation strategy to tackle these problems and introduce a diffusion process that mimics the susceptible-infectious-removed (SIR) epidemic process but is more tractable analytically. While methods based on discrete-time models require the epidemic and data collection processes to have similar time scales, our approach, based on a continuous-time model, is free of such a constraint. Using simulated data, we found that all parameters of the SIR model, including the generation time, were estimated accurately if the observation interval was less than 2.5 times the generation time of the disease. Previous discrete-time TSIR models have been unable to estimate generation times, given that they assume the generation time is equal to the observation interval. However, we were unable to estimate the generation time of measles accurately from historical data. This indicates that simple models assuming homogeneous mixing (even with age structure), of the type standard in mathematical epidemiology, miss key features of epidemics in large populations.
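For reference, the deterministic SIR skeleton that the paper's diffusion process mimics (forward-Euler integration; the parameter values are illustrative, and in this parameterization the mean infectious period is 1/gamma):

```python
def sir_simulate(beta, gamma, s0, i0, steps=1000, dt=0.01):
    """Forward-Euler integration of the SIR ODEs with population fractions:
    ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i."""
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# Hypothetical parameters with R0 = beta/gamma = 3, so the epidemic takes off.
s, i, r = sir_simulate(beta=0.3, gamma=0.1, s0=0.99, i0=0.01)
```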


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)