Similar Articles
20 similar articles found.
1.
Longitudinal or repeated measures data with clumping at zero occur in many biometric applications, including health policy research, epidemiology, nutrition, and meteorology. These data exhibit correlation because they are measured on the same subject over time or because subjects may be considered repeated measures within a larger unit such as a family. They present special challenges because of the extreme non-normality of the distributions involved. We present a mixed-effects mixed-distribution model with correlated random effects for repeated measures data with clumping at zero. The model contains components for the probability of a nonzero value and for the mean of nonzero values, accommodates repeated measurements through random effects, and allows for correlation between the two components. Methods are given for describing the effect of predictor variables on the probability of nonzero values, on the mean of nonzero values, and on the overall mean amount; this interpretation also applies to the mixed-distribution model for cross-sectional data. The proposed methods are illustrated with analyses of the effects of several covariates on 1996 medical expenditures for subjects clustered within households, using data from the Medical Expenditure Panel Survey.
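As a rough illustration of the two-part structure this entry describes, the sketch below fits the occurrence and magnitude components separately on simulated cross-sectional data. The correlated random effects that distinguish the paper's longitudinal model are omitted, and all data and names are illustrative:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative two-part ("mixed-distribution") fit for semicontinuous data.
# Cross-sectional sketch only: the paper's correlated random effects for
# clustering within households are omitted. Data below are simulated.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                             # a single covariate
p_nonzero = 1 / (1 + np.exp(-(-0.5 + 0.8 * x)))    # true P(Y > 0 | x)
nonzero = rng.random(n) < p_nonzero
y = np.where(nonzero, np.exp(1.0 + 0.5 * x + rng.normal(0, 0.7, n)), 0.0)

X = sm.add_constant(x)
part1 = sm.Logit((y > 0).astype(int), X).fit(disp=0)   # occurrence model
part2 = sm.OLS(np.log(y[y > 0]), X[y > 0]).fit()       # magnitude model (log scale)

# Overall mean: E[Y|x] = P(Y>0|x) * E[Y|Y>0, x], using the lognormal
# back-transformation exp(mu + sigma^2/2) for the positive part.
sigma2 = part2.scale
mean_pos = np.exp(X @ part2.params + sigma2 / 2)
overall_mean = part1.predict(X) * mean_pos
print(part1.params, part2.params)
```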

2.
Health and safety studies that entail both incidence and magnitude of effects produce semi-continuous outcomes, in which the response is either zero or a continuous positive value. Zero-inflated left-censored models typically employ latent mixture constructions to allow different covariate processes to impact the incidence versus the magnitude. Assessment of the model, however, requires a focus on the observable characteristics. We employ a conditional decomposition approach, in which the model assessment is partitioned into two observable components: the adequacy of the marginal probability model for the boundary value and the adequacy of the conditional model for values strictly above the boundary. A conditional likelihood decomposition facilitates the statistical assessment. For the corresponding residual and graphical analysis, the conditional mean and quantile functions for events above the boundary and the marginal probabilities of boundary events are investigated. Large-sample standard errors for these quantities are derived for enhanced graphical assessment, and simulation is conducted to investigate the finite-sample behaviour. The methods are illustrated with data from two health-related safety studies. In each case, the conditional assessments identify the source of lack of fit of the previously considered model and thus lead to an improved model.
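In notation the abstract does not spell out, one standard way to write such a decomposition for observations y_i with boundary probability π_i is (a sketch; the paper's left-censored construction may differ in detail):

$$
L(\theta) \;=\; \prod_i \pi_i^{\mathbf{1}(y_i = 0)} \,\bigl[(1 - \pi_i)\, f(y_i \mid y_i > 0)\bigr]^{\mathbf{1}(y_i > 0)},
$$

so the log-likelihood separates into a Bernoulli term for the boundary indicator and a conditional term for the strictly positive values, matching the two observable components above.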

3.
Although considerable attention has been given to zero-inflated count data, research on zero-inflated lognormal data is limited. In this article, we consider a study examining human sperm cell DNA damage, obtained from a single-cell gel electrophoresis (COMET assay) experiment, in which the outcome measures present a typical example of lognormal data with excess zeros. The problem is further complicated by the fact that each study subject has multiple outcomes at each of up to three visits separated by six-week intervals. Previous methods for zero-inflated lognormal data are based on either simple experimental designs, where comparison of means of zero-inflated lognormal data across different experimental groups is of primary interest, or longitudinal measurements, where only one observation is available for each subject at each visit. These methods cannot be applied when multiple observations per visit are possible and both inter- and intra-subject variations are present. Our zero-inflated model extends the previous methods by incorporating a hierarchical structure that uses latent random variables to take into account both inter- and intra-subject variation in zero-inflated lognormal data. An EM algorithm is developed to obtain the maximum likelihood estimates of the parameters, and their standard errors can be estimated by parametric bootstrap. The model is illustrated using the COMET assay data.
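A minimal sketch of the zero-inflated lognormal likelihood, fitted by direct numerical maximization on simulated single-level data; the paper's hierarchical random effects and EM algorithm are not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

# Single-level zero-inflated lognormal MLE sketch (the paper's hierarchical
# structure for inter-/intra-subject variation is omitted).
def negloglik(theta, y):
    logit_p, mu, log_sigma = theta
    p = 1.0 / (1.0 + np.exp(-logit_p))          # P(Y > 0)
    sigma = np.exp(log_sigma)
    pos = y[y > 0]
    ll_zero = np.sum(y == 0) * np.log(1.0 - p)
    ll_pos = (len(pos) * np.log(p)
              + np.sum(lognorm.logpdf(pos, s=sigma, scale=np.exp(mu))))
    return -(ll_zero + ll_pos)

rng = np.random.default_rng(1)
y = np.where(rng.random(500) < 0.3, 0.0,        # 30% structural zeros
             rng.lognormal(mean=2.0, sigma=0.5, size=500))
fit = minimize(negloglik, x0=np.zeros(3), args=(y,), method="Nelder-Mead")
print(fit.x)  # logit P(Y>0), mu, log sigma
```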

4.
This paper is concerned with the marginal models associated with a given multivariate first-order autoregressive model. A general theory is developed to determine when reductions in the known orders of the marginal models will occur. When the autoregressive coefficient matrix has repeated eigenvalues, there may be global reductions in the marginal models. Zeros in the eigenvectors and generalized eigenvectors of the autoregressive coefficient matrix lead to local reductions in the marginal models. The case when the autoregressive parameter matrix has systematic zeros is also investigated.
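For context, the "known orders" come from a standard result (offered here as background, not restated in the abstract): for an m-variate VAR(1) model y_t = Φy_{t-1} + ε_t, premultiplying by the adjugate of (I_m − ΦB), with B the backshift operator, gives

$$
\det(I_m - \Phi B)\, y_{jt} \;=\; \bigl[\operatorname{adj}(I_m - \Phi B)\,\varepsilon_t\bigr]_j,
$$

so each marginal series follows at most an ARMA(m, m−1) model. Reductions occur when the degree-m autoregressive polynomial loses degree or shares factors with the j-th moving-average polynomial, which is where the eigenvalue and eigenvector conditions enter.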

5.
Three linear methods for estimating parameter values of vector autoregressive moving-average (VARMA) models, which are in general at least an order of magnitude faster than maximum likelihood estimation, are developed in this paper. Simulation results for different model structures with varying numbers of component series and observations suggest that the accuracy of these procedures is in most cases comparable with maximum likelihood estimation. Procedures for estimating parameter standard errors are also discussed and used for identification of nonzero elements in the VARMA polynomial structures. These methods can also be used to establish the order of the VARMA structure. We note, however, that the primary purpose of these estimates is to generate initial estimates for the nonzero parameters in order to reduce the subsequent computational time of more efficient estimation procedures such as exact maximum likelihood.

6.
Quantile autoregression (QAR) is particularly attractive for censored data. However, unlike standard regression models, autoregressive models must take account of censoring on both the response and the regressors. In this article, we show that the existing censored quantile regression methods produce consistent estimators for QAR models when using only the fully observed regressors. A new algorithm is proposed to provide a censored QAR estimator by adopting imputation methods. The algorithm redistributes the probability mass of censored points appropriately and iterates towards self-consistent solutions. Monte Carlo simulations and empirical applications are conducted to demonstrate the merits of the proposed method.

7.
Motivated by problems encountered in studying treatments for drug dependence, where repeated binary outcomes arise from monitoring biomarkers for recent drug use, this article discusses a statistical strategy using Markov transition models for analyzing incomplete binary longitudinal data. When the mechanism giving rise to missing data can be assumed to be 'ignorable', standard Markov transition models can be applied to the observed data to draw likelihood-based inference on transition probabilities between outcome events. This approach is illustrated using binary results from urine drug screening in a clinical trial of baclofen for cocaine dependence. When longitudinal data have 'nonignorable' missingness mechanisms, random-effects Markov transition models can be used to model the joint distribution of the binary data matrix and the matrix of missingness indicators. Categorizing missingness patterns into those for occasional or 'intermittent' missingness and those for monotonic missingness or 'missingness due to dropout', the random-effects Markov transition model was applied to a data set containing repeated breath samples analyzed for expired carbon monoxide levels among opioid-dependent, methadone-maintained cigarette smokers in a smoking cessation trial. Markov transition models provide a novel reconceptualization of treatment outcomes, offering both intuitive statistical values and relevant clinical insights.
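A minimal sketch of the first-order Markov transition idea under ignorable missingness: regress each binary outcome on its predecessor by logistic regression. Data are simulated and all names are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# First-order Markov transition model for binary longitudinal outcomes:
# regress the current outcome on the previous one (covariates could be
# added to the formula). Under an ignorable missingness mechanism the same
# likelihood-based fit applies to the observed transitions.
rng = np.random.default_rng(2)
n_subj, n_time = 100, 8
rows = []
for i in range(n_subj):
    y_prev = int(rng.integers(0, 2))
    for t in range(1, n_time):
        logit = -1.0 + 2.0 * y_prev            # persistence in drug-use status
        y = int(rng.random() < 1 / (1 + np.exp(-logit)))
        rows.append({"id": i, "t": t, "y": y, "y_prev": y_prev})
        y_prev = y
df = pd.DataFrame(rows)

fit = smf.logit("y ~ y_prev", data=df).fit(disp=0)
# Transition probabilities P(Y_t = 1 | Y_{t-1} = j) for j = 0, 1:
p = fit.predict(pd.DataFrame({"y_prev": [0, 1]}))
print(fit.params, p.values)
```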

8.
Fully sequential generalizations of some group sequential tests currently in use in clinical trials are considered. The model involves censored data with random staggered entry and is used for the comparison of two treatments. Results show that the sequential tests can outperform group sequential tests and provide more information about the nature of the relationship between the two treatments. Furthermore, they allow complete freedom in scheduling data evaluation.

9.
There has been much debate about the relative merits of mixed effects and population-averaged logistic models. We present a different perspective on this issue by noting that the investigation of the relationship between these models for a given dataset offers a type of sensitivity analysis that may reveal problems with assumptions of the mixed effects and/or population-averaged models for clustered binary response data in general and longitudinal binary outcomes in particular. We present several datasets in which the following violations of assumptions are associated with departures from the expected theoretical relationship between these two models: 1) negative intra-cluster correlations; 2) confounding of the response-covariate relationship by cluster effects; and 3) confounding of autoregressive relationships by the link between baseline outcomes and subject effects. Under each of these conditions, the expected theoretical attenuation of the population-averaged odds ratio relative to the cluster-specific odds ratio does not necessarily occur. In all cases, the naive fitting of a random intercept logistic model appears to lead to bias. In response, the random intercept model is modified to accommodate negative intra-cluster correlations, confounding due to clusters, or baseline correlations with random effects. Comparisons are made with GEE estimation of population-averaged models and conditional likelihood estimation of cluster-specific models. Several examples, including a cross-over trial, a multicentre nonrandomized treatment study, and a longitudinal observational study are used to illustrate these modifications.
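The "expected theoretical attenuation" mentioned here is the standard approximation for random-intercept logistic models (given as background; the abstract itself does not restate it):

$$
\beta_{PA} \;\approx\; \frac{\beta_{SS}}{\sqrt{1 + c^2\sigma_b^2}}, \qquad c = \frac{16\sqrt{3}}{15\pi} \approx 0.588,
$$

where β_SS is the cluster-specific (subject-specific) coefficient, β_PA its population-averaged counterpart, and σ_b² the random-intercept variance. The population-averaged odds ratio is thus attenuated toward 1 whenever σ_b² > 0, which is precisely the relationship that fails under the violations listed above.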

10.
An improved analytical model able to simulate normal ballistic impacts of blunt-shaped equivalent projectiles against ceramic tiles, without any backing plate, is described. The projectile is modelled as a rigid body with a deformable tip that can be eroded when in contact with the ceramic, which in turn is eroded and comminuted when critical conditions are reached. The modified Bernoulli equation relates the velocities of the deformable projectile and the ceramic, assuming hydrodynamic behaviour of the parts in contact in the proximity of the impact area. Innovative contributions are added in order to also reproduce low-velocity impacts (below the Bernoulli limit), a range in which the modified Bernoulli equation cannot be applied. The model is compared with experimental data both from the literature and from ballistic tests with actual bullets performed by the authors, predicting the residual velocity and the residual mass of the projectile (when the data are available), along with an analysis of the evolution in time of the parameters.
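The abstract does not reproduce the equation itself; in the penetration-mechanics literature the modified (Alekseevskii-Tate) Bernoulli balance usually takes the form below, so this is offered as background rather than as the authors' exact formulation:

$$
\tfrac{1}{2}\rho_p\,(v - u)^2 + Y_p \;=\; \tfrac{1}{2}\rho_c\,u^2 + R_c,
$$

with v the projectile (tail) velocity, u the penetration velocity, ρ_p and ρ_c the projectile and ceramic densities, Y_p the projectile strength, and R_c the ceramic penetration resistance. The "Bernoulli limit" is then the lowest impact velocity at which a real solution with u > 0 exists; below it the hydrodynamic balance no longer applies, which is the regime the paper's added contributions address.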

11.
Random-effects models for multivariate repeated measures
Mixed models are widely used for the analysis of one repeatedly measured outcome. If more than one outcome is present, a mixed model can be used for each one. These separate models can be tied together into a multivariate mixed model by specifying a joint distribution for their random effects. This strategy has been used for joining multivariate longitudinal profiles or other types of multivariate repeated data. However, computational problems are likely to occur as the number of outcomes increases. A pairwise modeling approach, in which all possible bivariate mixed models are fitted and inference follows from pseudo-likelihood arguments, has been proposed to circumvent the dimensional limitations of multivariate mixed models. An analysis of 22-variate longitudinal measurements of hearing thresholds illustrates the performance of the pairwise approach in the context of multivariate linear mixed models. For generalized linear mixed models, a data set containing repeated measurements of seven aspects of psycho-cognitive functioning is analyzed.

12.
The pore size distributions in cement pastes and mortars, over the range of pore sizes determined by high-pressure mercury intrusion porosimetry (MIP), can be described in terms of a multimodal distribution by using lognormal simulation. The pore size distribution may be regarded as a mixture of lognormal distributions. Such a mixture is defined by a compound density function p(x) = Σ f_i p(x; μ_i, σ_i), with Σ f_i = 1, where x is the pore diameter, f_i is the weighting factor of the i-th lognormal subdistribution of pore sizes p(x; μ_i, σ_i), and μ_i and σ_i are the location parameter and the shape parameter of the i-th subdistribution, respectively. This may indicate that different origins and formation mechanisms exist for pores in different size ranges in cementitious materials. A graphical method is proposed to estimate the parameters of the compound distribution. Applications of this model to the prediction of permeability are discussed.
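The paper proposes a graphical method for the parameters; as a cross-check, the same compound model can be fitted by EM, since a Gaussian mixture on log-diameters is exactly a lognormal mixture on diameters. A sketch with simulated data (all numbers illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit the compound (multimodal) lognormal model: a Gaussian mixture on
# log(pore diameter) is a lognormal mixture on the diameter itself.
# Pore-size data below are simulated; real MIP data would replace them.
rng = np.random.default_rng(3)
d = np.concatenate([rng.lognormal(np.log(0.01), 0.4, 400),    # finer pores (um)
                    rng.lognormal(np.log(0.1), 0.5, 600)])    # coarser pores
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(d).reshape(-1, 1))

f = gm.weights_                           # weighting factors f_i (sum to 1)
mu = gm.means_.ravel()                    # location parameters mu_i (log scale)
sigma = np.sqrt(gm.covariances_.ravel())  # shape parameters sigma_i
print(f, np.exp(mu), sigma)               # weights, modal diameters, spreads
```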

13.
Modelling risks in disease mapping
In this article, we propose a strategy for the analysis of mortality data, with the aim of providing a guideline for epidemiologists and public health researchers in choosing a reasonable model for estimating mortality (or incidence) risks. Maps displaying crude mortality rates or ratios are usually misleading because of the instability of the estimators in sparsely populated areas. As an alternative, many smoothing methods based on Poisson inference have been presented in the literature. They account for the extra-Poisson variation (overdispersion) frequently present under the homogeneous Poisson model by incorporating random effects. Here, we recommend testing for the potential sources of extra-Poisson variation because, depending on them, the models that best fit the data may differ. Overdispersion can be due mainly to spatial autocorrelation, unstructured heterogeneity, or a combination of the two; when studying very rare diseases, it can also be due to an excess of zeros in the data. In this article, the different situations the analyst may encounter are detailed and appropriate procedures for each case are presented. The alternative models are illustrated using mortality data provided by the Statistical Institute of Navarra, Spain.

14.
This article investigates the role of proxy data in dealing with the common problem of missing data in clinical trials using repeated measures designs. In an effort to avoid the missing data situation, some proxy information can be gathered. The question is how to treat proxy information; that is, is it always better to utilize proxy information when there are missing data? A model for repeated measures data with missing values is considered and a strategy for utilizing proxy information is developed. Simulations are then used to compare the power of a test that uses proxy information with that of a test that simply uses all available data. It is concluded that using proxy information can be a useful alternative when such information is available. The implications for various clinical designs are also considered, and a data collection strategy for efficiently estimating parameters is suggested.

15.

Recent research has indicated that the toxicity of inhaled ultrafine particles may be associated with the size of discrete particles deposited in the lungs. However, it has been speculated that in some occupational settings rapid coagulation will lead to relatively low exposures to discrete ultrafine particles. Investigation of likely occupational exposures to ultrafine particles following the generation of aerosols with complex size distributions is most appropriately addressed using validated numerical models. A numerical model has been developed to estimate the size-distribution time-evolution of compact and fractal-like aerosols within workplaces resulting from coagulation, diffusional deposition, and gravitational settling. Good agreement has been shown with an analytical solution for lognormal aerosol evolution, indicating good compatibility with previously published models. Validation using experimental data shows reasonable agreement when assuming spherical particles and coalescence on coagulation. Assuming the formation of fractal-like particles within a range of diameters led to good agreement between modeled and experimental data. The model appears well suited to estimating the relationship between the size distribution of emitted well-mixed ultrafine aerosols and the aerosol that is ultimately inhaled, where diffusion losses are small.
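As a bare-bones illustration of the coagulation part of such a model, the sketch below integrates the discrete Smoluchowski equation with a constant kernel; the published model's size-dependent kernels, diffusional deposition, and gravitational settling are all omitted:

```python
import numpy as np

# Minimal discrete Smoluchowski coagulation step with a constant kernel K.
def coagulation_step(N, K, dt):
    """N[k] = number concentration of particles made of k+1 monomers."""
    n = len(N)
    dN = np.zeros(n)
    for i in range(n):
        for j in range(n):
            loss = K * N[i] * N[j]
            dN[i] -= loss                       # i removed by any collision
            if i + j + 1 < n:
                dN[i + j + 1] += 0.5 * loss     # i and j merge (pairs counted twice)
    return N + dt * dN

N = np.zeros(100)
N[0] = 1e12                                     # monomers only, #/m^3
K, dt = 1e-15, 0.1                              # m^3/s, s (illustrative values)
for _ in range(1000):
    N = coagulation_step(N, K, dt)
print(N[:5] / N.sum())                          # growing tail of aggregates
```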

16.
This paper presents a new design for a multi-channel electrical mobility spectrometer which measures the lognormal size distribution and number concentration of aerosol particles in the size range 5–300 nm with a short response time. The spectrometer charges particles in the test sample by unipolar corona discharge; they are then classified into 16 channels by electrical mobility. Charged particles are detected in the channels by individual aerosol electrometers, giving an electrical mobility spectrum for the sample.

The main aspect of the spectrometer design is a wedge-shaped classifier with flat electrodes. This allows a flow to be drawn from the classifier at 16 different levels/channels with minimal disturbance to the remaining flow; hence filter-based aerosol electrometers can be used for detection. The varying field within the classifier caused by the wedge shape is advantageous to the classification and optimised through the selection of the wedge angle.

Also presented is an alternative technique for inferring the lognormal size distribution of an aerosol from a measured electrical mobility spectrum. This involves using a theoretical model of the instrument to simulate the output mobility spectra for a large number of aerosol samples with lognormal size distributions. The resulting data library can be searched against a measured electrical mobility spectrum to find the corresponding size distribution.

The experimental work presented in this paper is a first evaluation of this spectrometer and includes measurement of the classifier transfer functions, basic calibration of the charger, and finally testing the spectrometer's performance on some simple unimodal lognormal aerosol samples.
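A toy version of the library-search inversion, with a random placeholder standing in for the instrument's channel kernels (which would come from the measured transfer functions):

```python
import numpy as np

# Library-search inversion sketch: simulate instrument responses for a grid
# of lognormal size distributions, then match a measured spectrum by least
# squares. 'response_matrix' is a placeholder for the channel kernels.
rng = np.random.default_rng(5)
channels, sizes = 16, 200
diam = np.logspace(np.log10(5e-9), np.log10(300e-9), sizes)
response_matrix = rng.random((channels, sizes))   # placeholder kernels

def simulate(mu, sigma):
    # Lognormal number distribution sampled on the diameter grid.
    pdf = np.exp(-(np.log(diam) - np.log(mu))**2 / (2 * sigma**2))
    pdf /= pdf.sum()
    return response_matrix @ pdf

# Build the library over (geometric mean diameter, geometric std deviation).
mus = np.logspace(np.log10(10e-9), np.log10(200e-9), 40)
gsds = np.linspace(1.2, 2.5, 20)
library = {(m, g): simulate(m, np.log(g)) for m in mus for g in gsds}

measured = simulate(50e-9, np.log(1.6))           # stand-in measurement
best = min(library, key=lambda k: np.sum((library[k] - measured)**2))
print(best)                                       # recovered (mu, GSD)
```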

17.
Statistical reaction models have been used to fit 13C NMR spectra of ethylene/1-octene copolymers and to describe the polymerization reaction probabilities. Ten models ranging in complexity from a one-site Bernoulli probability model to multiple-site second-order Markov systems were studied. Model parameters were determined by fitting the experimental integrations of replicated spectra using a maximum likelihood method. The best fit to the experimental NMR spectra was obtained with a two-site model: one site producing mainly high-density polymer following a Bernoulli probability model, while the second site allows more incorporation of octene, following a chain-end-controlled probability described by first-order Markov statistics. © 1994 John Wiley & Sons, Inc.
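For a flavor of the first-order Markov description, the sketch below computes the triad sequence probabilities implied by an assumed 2x2 transition matrix; these are the quantities the NMR integrals constrain (transition values invented for illustration):

```python
import numpy as np

# Triad probabilities for an E/O copolymer under first-order Markov
# statistics: a 2x2 transition matrix P and its stationary distribution
# determine every sequence fraction.
P = np.array([[0.95, 0.05],     # P(next=E | E), P(next=O | E)
              [0.70, 0.30]])    # P(next=E | O), P(next=O | O)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

labels = "EO"
for a in range(2):
    for b in range(2):
        for c in range(2):
            p_triad = pi[a] * P[a, b] * P[b, c]
            print(labels[a] + labels[b] + labels[c], round(p_triad, 4))
```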

18.
A simplified procedure is presented to estimate the two adjustable parameters in a lognormal distribution function. From values of these parameters, the weight distribution function, W(M), as well as various molecular weight averages can be calculated. The method was applied to fractionation data selected at random for several polymers. The agreement between calculated and reported values appears to be good.
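For a lognormal weight distribution the molecular weight averages follow in closed form, which is what makes the two-parameter fit useful; a worked check with invented parameter values:

```python
import numpy as np

# If the weight distribution W(M) is lognormal with location mu and shape
# sigma (both on the natural-log scale), the averages follow directly:
#   Mw = exp(mu + sigma^2 / 2),  Mn = exp(mu - sigma^2 / 2),
# so the polydispersity index is Mw / Mn = exp(sigma^2).
mu, sigma = np.log(1.0e5), 0.8          # illustrative fitted values
Mw = np.exp(mu + sigma**2 / 2)
Mn = np.exp(mu - sigma**2 / 2)
print(Mw, Mn, Mw / Mn)                  # PDI = exp(0.64), about 1.90
```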

19.
周云龙, 张全厚, 辛凯. 《化学工程》 2012, 40(12): 65-69
Using a combined device consisting of an improved anti-blocking orifice plate and a straight pipe section, and based on a homogeneous flow model that accounts for the effect of the slip ratio, the Bernoulli and continuity equations were applied to derive that the factors affecting the dryness of gas-liquid two-phase flow are the pressure p and the pressure-drop ratio Δp1-2/Δp3-4; a dryness measurement model was thereby established. A support vector machine (SVM) was then built with p and Δp1-2/Δp3-4 as input variables and the dryness as output, achieving a relative error of dryness measurement within 3.6%; after combination with Lin's flow measurement model, the relative error of total mass flow measurement was within 8.3% over the range of the experimental parameters. This realizes dual-parameter flow measurement with a single throttling element. The discharge coefficient of the conical orifice plate was also calibrated and a fitting formula for the parameter θ in Lin's model was obtained; the analysis shows that θ is an important factor affecting the measurement error of Lin's model.
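A sketch of the SVM regression step on synthetic stand-in data (the response surface, parameter ranges, and hyperparameters are all invented; only the input/output choice follows the abstract):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# SVM regression of two-phase-flow dryness x on pressure p and the
# pressure-drop ratio dp12/dp34, mirroring the paper's input/output choice.
# Training data are synthetic stand-ins for the experimental measurements.
rng = np.random.default_rng(6)
n = 300
p = rng.uniform(0.2, 0.8, n)                     # pressure (illustrative range)
ratio = rng.uniform(0.5, 3.0, n)                 # dp12 / dp34
x_true = 0.1 + 0.25 * np.log(ratio) + 0.2 * p    # invented response surface
x = x_true + rng.normal(0, 0.005, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
model.fit(np.column_stack([p, ratio]), x)

pred = model.predict(np.column_stack([p, ratio]))
rel_err = np.abs(pred - x) / np.abs(x)
print(rel_err.max())                             # training-set relative error
```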

20.
The validity of a stationary time series model may be measured by the goodness of fit of the spectral distribution function. Anderson (Technical Report 27, 1991; Technical Report 309, 1995; Stanford University) has worked out the closed-form characteristic functions of the Cramér–von Mises criterion for general linear processes, under the condition that all values of the parameters are specified. The asymptotic approach is not easily implemented and usually requires a case-by-case analysis. In this paper we propose a bootstrap goodness-of-fit test in the frequency domain. By properly resampling the residuals, we can consistently estimate the p values for many weakly dependent semiparametric models with unspecified parameter values. This is the content of the main theorem that we establish. A set of simulations is conducted, showing accurate significance levels and good power. The tests are applied to the lynx data and reveal structure unexplained by the AR(1) model fitted by Tong (J. R. Stat. Soc. A 140 (1977), 432–36). A possible generalization with application to financial data analysis is also discussed.
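A self-contained sketch of a frequency-domain residual bootstrap in the spirit described, specialized to an AR(1) fit with a Cramér–von Mises-type distance on the cumulative periodogram (an illustration of the idea, not the paper's exact procedure):

```python
import numpy as np

# Frequency-domain bootstrap goodness-of-fit sketch for an AR(1) fit:
# compare the standardized cumulative periodogram of the data against the
# bootstrap distribution obtained by resampling the fitted residuals.
def cvm_stat(y, phi):
    # Distance between the model-standardized cumulative periodogram of y
    # and the uniform line it should follow under a correct AR(1) fit.
    n = len(y)
    freq = np.arange(1, n // 2) * 2 * np.pi / n
    per = np.abs(np.fft.fft(y - y.mean())[1:n // 2])**2 / n
    spec = 1.0 / np.abs(1 - phi * np.exp(-1j * freq))**2   # AR(1) spectrum shape
    ratio = np.cumsum(per / spec)
    ratio /= ratio[-1]
    uniform = np.arange(1, len(ratio) + 1) / len(ratio)
    return np.sum((ratio - uniform)**2)

def ar1_fit(y):
    yc = y - y.mean()
    return np.dot(yc[1:], yc[:-1]) / np.dot(yc[:-1], yc[:-1])

rng = np.random.default_rng(7)
y = rng.normal(size=200)
for t in range(1, 200):                      # simulate an AR(1), phi = 0.5
    y[t] += 0.5 * y[t - 1]

phi = ar1_fit(y)
obs = cvm_stat(y, phi)
resid = y[1:] - phi * y[:-1]
boot = []
for _ in range(500):                         # residual-resampling bootstrap
    e = rng.choice(resid, size=200, replace=True)
    yb = np.zeros(200)
    for t in range(1, 200):
        yb[t] = phi * yb[t - 1] + e[t]
    boot.append(cvm_stat(yb, ar1_fit(yb)))
p_value = np.mean(np.array(boot) >= obs)
print(p_value)
```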
