Similar documents
 20 similar documents found (search time: 31 ms)
1.
Random-effects models for multivariate repeated measures   (total citations: 2; self-citations: 0; citations by others: 2)
Mixed models are widely used for the analysis of one repeatedly measured outcome. If more than one outcome is present, a mixed model can be used for each one. These separate models can be tied together into a multivariate mixed model by specifying a joint distribution for their random effects. This strategy has been used for joining multivariate longitudinal profiles or other types of multivariate repeated data. However, computational problems are likely to occur when the number of outcomes increases. A pairwise modeling approach, in which all possible bivariate mixed models are fitted and where inference follows from pseudo-likelihood arguments, has been proposed to circumvent the dimensional limitations in multivariate mixed models. An analysis of 22-variate longitudinal measurements of hearing thresholds illustrates the performance of the pairwise approach in the context of multivariate linear mixed models. For generalized linear mixed models, a data set containing repeated measurements of seven aspects of psycho-cognitive functioning is analyzed.
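As a rough numerical illustration of the pairwise idea (not the authors' pseudo-likelihood machinery), the sketch below assembles a K×K random-effects covariance matrix from 2×2 blocks estimated pairwise. Subject means stand in for genuine bivariate mixed-model fits, and all data are simulated:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_subj, n_rep, K = 500, 5, 4

# True random-effects covariance between the K outcomes' subject intercepts.
A = rng.normal(size=(K, K))
true_cov = A @ A.T + np.eye(K)

b = rng.multivariate_normal(np.zeros(K), true_cov, size=n_subj)       # subject effects
y = b[:, None, :] + rng.normal(scale=0.5, size=(n_subj, n_rep, K))    # repeated measures

# Pairwise step: estimate each 2x2 block of the random-effects covariance
# from the corresponding bivariate data, then assemble the full KxK matrix,
# averaging the multiple estimates of each diagonal element across pairs.
subj_means = y.mean(axis=1)  # crude stand-in for bivariate mixed-model fits
est = np.zeros((K, K))
counts = np.zeros((K, K))
for j, k in combinations(range(K), 2):
    block = np.cov(subj_means[:, [j, k]], rowvar=False)
    est[np.ix_([j, k], [j, k])] += block
    counts[np.ix_([j, k], [j, k])] += 1
est /= counts

print(np.round(est - true_cov, 1))
```

Each pair is fitted on its own, so no object of dimension larger than 2×2 is ever handled — this is what circumvents the dimensional limitation as the number of outcomes grows.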

2.
A software interface for performing on-the-fly quantum and force field calculations has been developed and integrated into RMG, an open-source reaction mechanism generation software package, to provide needed estimates of thermodynamic parameters. These estimates, based on three-dimensional molecular geometries, bypass the traditional group-additivity-based approach, which can suffer from lack of availability of necessary parameters; this issue is particularly evident for polycyclic species with fused rings, which would require ad hoc ring corrections in the group-additivity framework. In addition to making extensive use of open-source tools, the interface takes advantage of recent developments from several fields, including three-dimensional geometry embedding, force fields, and chemical structure representation, along with enhanced robustness of quantum chemistry codes. The effectiveness of the new approach is demonstrated for a computer-constructed model of combustion of the synthetic jet fuel JP-10. The interface also establishes a framework for future improvements in the chemical fidelity of computer-generated kinetic models.

3.
Multi-state models for the analysis of time-to-event data   (total citations: 1; self-citations: 0; citations by others: 1)
The experience of a patient in a survival study may be modelled as a process with two states and one possible transition from an "alive" state to a "dead" state. In some studies, however, the "alive" state may be partitioned into two or more intermediate (transient) states, each corresponding to a particular stage of the illness. In such studies, multi-state models can be used to model the movement of patients among the various states. In these models, issues of interest include the estimation of progression rates, assessing the effects of individual risk factors, survival rates, and prognostic forecasting. In this article, we review modelling approaches for multi-state models, focusing on the estimation of quantities such as transition probabilities and survival probabilities. Differences between these approaches are discussed, with attention to the possible advantages and disadvantages of each method. We also review the existing software currently available to fit the various models and present new software developed in the form of an R library to analyse such models. Different approaches and software are illustrated using data from the Stanford heart transplant study and data from a study on breast cancer conducted in Galicia, Spain.
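A minimal numerical sketch of the multi-state idea, using a hypothetical discrete-time illness–death chain rather than the continuous-time models or the R library discussed above; transition probabilities are estimated by row-normalized transition counts:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illness-death model: 0 = healthy, 1 = ill, 2 = dead (absorbing).
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

# Simulate discrete-time paths for many patients.
n_patients, horizon = 2000, 20
counts = np.zeros_like(P)
for _ in range(n_patients):
    s = 0
    for _ in range(horizon):
        nxt = rng.choice(3, p=P[s])
        counts[s, nxt] += 1
        s = nxt

# MLE of the transition probabilities: row-normalized transition counts.
row_tot = counts.sum(axis=1, keepdims=True)
P_hat = counts / np.maximum(row_tot, 1)

# Ten-step survival probability from "healthy": chance of not being dead.
surv10 = 1.0 - np.linalg.matrix_power(P_hat, 10)[0, 2]
print(np.round(P_hat, 2), round(surv10, 2))
```

Survival and progression summaries then follow from powers of the estimated transition matrix, which is the discrete-time analogue of the transition-probability estimation discussed in the abstract.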

4.
Several different rate functions of the recurrent event process have been proposed for analysing recurrent event data when the observation of a study subject can be terminated by a failure event, such as death. When the terminal event is correlated with the underlying recurrent event process, these rate functions have different interpretations; however, recognition of the differences has been lacking theoretically and practically. In this article, we study the relationship between these rate functions and demonstrate that models based on an inappropriate rate function may lead to misleading scientific conclusions in various scenarios. An analysis of data from an AIDS clinical trial is presented to emphasise the importance of cautious model selection.

5.
The analysis of longitudinal data with non-ignorable missingness remains an active area in biostatistics research. This article discusses various random-effects and latent-process models which have been proposed for analyzing longitudinal binary data subject to both non-ignorable intermittent missing data and dropout. These models account for non-ignorable missingness by introducing random effects or a latent process which is shared between the response model and the model for the missing-data mechanism. We discuss various random-effects and latent-process approaches and compare them in analyses of an opiate clinical trial data set, which had a high proportion of intermittent missingness and dropout. Using the same data set, we also compare these approaches with other methods for accounting for non-ignorable missingness.

6.
There has been much debate about the relative merits of mixed effects and population-averaged logistic models. We present a different perspective on this issue by noting that the investigation of the relationship between these models for a given dataset offers a type of sensitivity analysis that may reveal problems with assumptions of the mixed effects and/or population-averaged models for clustered binary response data in general and longitudinal binary outcomes in particular. We present several datasets in which the following violations of assumptions are associated with departures from the expected theoretical relationship between these two models: 1) negative intra-cluster correlations; 2) confounding of the response-covariate relationship by cluster effects; and 3) confounding of autoregressive relationships by the link between baseline outcomes and subject effects. Under each of these conditions, the expected theoretical attenuation of the population-averaged odds ratio relative to the cluster-specific odds ratio does not necessarily occur. In all cases, the naive fitting of a random intercept logistic model appears to lead to bias. In response, the random intercept model is modified to accommodate negative intra-cluster correlations, confounding due to clusters, or baseline correlations with random effects. Comparisons are made with GEE estimation of population-averaged models and conditional likelihood estimation of cluster-specific models. Several examples, including a cross-over trial, a multicentre nonrandomized treatment study, and a longitudinal observational study are used to illustrate these modifications.
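The expected attenuation referred to above can be illustrated numerically: for a random-intercept logistic model, integrating over the intercept distribution yields a population-averaged odds ratio smaller than the cluster-specific one. The sketch below (with assumed illustrative values β = 1, σ = 2, not taken from any of the paper's datasets) does this by direct numerical integration:

```python
import numpy as np

def marginal_prob(eta, sigma, n=4001):
    # Integrate expit(eta + b) over b ~ N(0, sigma^2) on a wide grid.
    b = np.linspace(-8 * sigma, 8 * sigma, n)
    w = np.exp(-0.5 * (b / sigma) ** 2)
    w /= w.sum()
    return np.sum(w / (1.0 + np.exp(-(eta + b))))

beta0, beta, sigma = -1.0, 1.0, 2.0   # cluster-specific intercept, slope, random-intercept SD
p0 = marginal_prob(beta0, sigma)
p1 = marginal_prob(beta0 + beta, sigma)
or_marginal = (p1 / (1 - p1)) / (p0 / (1 - p0))
print(round(float(np.exp(beta)), 2), round(float(or_marginal), 2))  # cluster-specific vs marginal OR
```

With a positive random-intercept variance the marginal odds ratio is pulled toward 1; the abstract's point is that under violated assumptions (e.g. negative intra-cluster correlation) this attenuation need not occur.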

7.
Drying data of apple and pear at 40, 50, 60, 70, and 80°C were described by the Weibull model, and it was observed that the shape parameter of the Weibull model did not depend on temperature. Therefore the reduced Weibull model with a fixed shape parameter was proposed as the primary model to describe drying data, with only a slight loss of goodness-of-fit. Temperature dependence of the time parameter (the time necessary to reduce the initial moisture ratio by 90%) could be described by two ad hoc models as the secondary models. Predictions using the integrated models agreed almost perfectly with the experimental drying data of apple at 45 and 65°C and of pear at 55 and 75°C, respectively. Kinetic analyses with published data have shown that the reduced Weibull model can also successfully be used to describe the drying data of certain fruits. The time parameters tabulated in this study can be useful for food manufacturers.
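A sketch of fitting the reduced Weibull model with a fixed shape parameter; the data, the shape value of 1.1, and the scale of 120 min are all hypothetical, not the paper's apple or pear measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.1  # fixed shape parameter (the reduced-Weibull assumption)

# Synthetic moisture-ratio data: MR(t) = exp(-(t/alpha)^beta) plus small noise.
alpha_true = 120.0  # minutes; hypothetical scale for one drying temperature
t = np.arange(10, 301, 10, dtype=float)
mr = np.exp(-(t / alpha_true) ** beta) + rng.normal(scale=0.005, size=t.size)
mr = np.clip(mr, 1e-6, 1.0)

# With beta fixed, -ln(MR) = (t/alpha)^beta, so a one-parameter least-squares
# fit for alpha has a closed form in z = t^beta space: y ≈ c*z, c = alpha^(-beta).
z = t ** beta
y = -np.log(mr)
alpha_hat = (np.sum(z * z) / np.sum(z * y)) ** (1.0 / beta)

# Time to reduce the initial moisture ratio by 90% (the "time parameter"): MR = 0.1.
t90 = alpha_hat * (-np.log(0.1)) ** (1.0 / beta)
print(round(alpha_hat, 1), round(t90, 1))
```

The secondary-model step (describing how the time parameter varies with temperature) would then regress `t90` values fitted at each temperature against temperature.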

8.
The application of a model of the initial deposition of fine particles from a flowing suspension onto smooth surfaces is discussed by comparison with literature experimental data and simplified models (the Lévêque equation). The model and its original features, including an accurate account of particle-surface interactions and ad hoc solution techniques, with special emphasis on the treatment of boundaries, have been thoroughly presented in Part I. The model demonstrates that in many circumstances diffusion is the limiting mechanism, so that simple models based on a continuous approach (through particle concentration) together with the perfect-sink assumption are accurate enough. Departures from such circumstances are identified by means of a parametric study based on our model. The comparison with the experimental data also suggests additional characterizations needed for future experimental investigations.

9.
Hamilton (A standard error for the estimated state vector of a state-space model. J. Economet. 33 (1986), 387–97) and Ansley and Kohn (Prediction mean squared error for state space models with estimated parameters. Biometrika 73 (1986), 467–73) have both proposed corrections to the naive approximation (obtained via substitution of the maximum likelihood estimates for the unknown parameters) of the Bayesian prediction mean squared error (MSE) for state space models, when the model's parameters are estimated from the data. Our work extends theirs in that we propose enhancements by identifying missing terms of the same order as that in their corrections. Because the approximations to the MSE are often subject to a frequentist interpretation, we compare our proposed enhancements with their original versions and with the naive approximation through a simulation study. For simplicity, we use the random walk plus noise model to develop the theory and to get our empirical results in the main body of the text. We also illustrate the differences between the various approximations with the Purse Snatching in Chicago series. Our empirical results show that (i) as expected, the underestimation in the naive approximation decreases as the sample size increases; (ii) the improved Ansley–Kohn approximation is the best compromise considering theoretical exactness, bias, precision and computational requirements, though the original Ansley–Kohn method performs quite well; finally, (iii) both the original and the improved Hamilton methods marginally improve the naive approximation. These conclusions also hold true with the Purse Snatching series.
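For concreteness, a minimal Kalman filter for the random walk plus noise model used in the paper's development, with assumed known variances — so there is no parameter estimation here and hence none of the MSE corrections the abstract is about:

```python
import numpy as np

rng = np.random.default_rng(3)
q, r = 0.5, 1.0   # state (level) and observation noise variances, assumed known
n = 300

# Simulate the random walk plus noise (local level) model.
level = np.cumsum(rng.normal(scale=np.sqrt(q), size=n))
y = level + rng.normal(scale=np.sqrt(r), size=n)

# Kalman filter; a, p are the one-step-ahead state mean and variance.
a, p = 0.0, 1e6   # near-diffuse initialization
filt = np.empty(n)
for t in range(n):
    f = p + r                    # innovation variance
    k = p / f                    # Kalman gain
    a = a + k * (y[t] - a)       # filtered state mean
    p = p * (1 - k) + q          # next one-step-ahead state variance
    filt[t] = a

mse = np.mean((filt - level) ** 2)
print(round(float(mse), 2), round(float(np.mean((y - level) ** 2)), 2))
```

When q and r are instead estimated from the data, the naive plug-in MSE understates the true prediction uncertainty — that underestimation is what the Hamilton and Ansley–Kohn corrections, and the paper's enhancements of them, address.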

10.
Online glucose prediction, which can be used to provide important information on future glucose status, is a key step to facilitate proactive management before glucose reaches undesirable concentrations. Based on frequency-band separation (FS) and empirical modeling approaches, this article considers several important aspects of online glucose prediction for subjects with type 1 diabetes mellitus. Three issues are of particular interest: (1) Can a global (or universal) model be developed from glucose data for a single subject and then used to make suitably accurate online glucose predictions for other subjects? (2) Does a new FS approach based on data filtering provide more accurate models than standard modeling methods? (3) Does a new latent variable modeling method result in more accurate models than standard modeling methods? These and related issues are investigated by developing autoregressive models and autoregressive models with exogenous inputs based on clinical data for two groups of subjects. The alternative modeling approaches are evaluated with respect to online short-term prediction accuracy for prediction horizons of 30 and 60 min, using independent test data. © 2013 American Institute of Chemical Engineers AIChE J 60: 574–584, 2014
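A sketch of the kind of autoregressive modeling described, using a synthetic glucose-like series rather than clinical data, with a 30-min prediction horizon (6 steps at an assumed 5-min sampling interval):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic CGM-like series at 5-min sampling: slow oscillation plus AR(1) noise.
n = 600
t = np.arange(n)
g = 120 + 30 * np.sin(2 * np.pi * t / 144)     # mg/dL, 12-hour cycle
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.9 * e[i - 1] + rng.normal(scale=2.0)
y = g + e

p, h = 6, 6  # AR order; 6-step horizon = 30 min at 5-min sampling
# Column j holds lag j+1 of the series; targets are y[p:].
X = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)

def predict(history, steps):
    """Recursive multi-step AR prediction from the last p values."""
    vals = list(history[-p:])
    for _ in range(steps):
        vals.append(float(np.dot(coef, vals[::-1][:p])))
    return vals[-1]

yhat = predict(y[:500], h)
print(round(yhat, 1), round(y[500 + h - 1], 1))  # prediction vs actual
```

The same fitted coefficients can be applied to another subject's history to probe the "global model" question raised in issue (1); the FS and latent-variable refinements are beyond this sketch.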

11.
Recent publications and presentations extol the virtues of filler pigments as titanium dioxide extenders. Theory predicts and experiment shows that in plastic systems, where pigment concentrations are relatively low compared to paints and inks, fillers do not significantly improve the optical efficiency of titanium dioxide. Close examination of published data shows that there are many unanswered questions, such as: How do variations in compounding conditions affect efficiency? Has sufficient attention been paid to measurement of light reflectance and transmission? Have the effects of light scattering and absorption been taken into account to explain optical measurements? What we have found is that there is no easy “fix” to improve the efficiency of titanium dioxide by the use of filler pigments. Serious questions also remain unanswered regarding the effect of ad hoc replacement of TiO2 with filler in systems requiring light stability, such as rigid polyvinyl building products.

12.
Recent studies of the dynamic mechanical behavior of ultra-high-modulus polyethylene are discussed in the context of our present understanding of the structure of these materials. In particular, the Takayanagi model is shown to achieve a new status in the light of direct measurements of crystal continuity from wide angle X-ray diffraction data. It is further shown that the Takayanagi model in one formulation is compatible with the Cox model for a short fiber reinforced composite. The fiber composite model offers a simple physical understanding of the fall in modulus due to the α-relaxation in terms of shear lag. This reduces the effectiveness of the continuous crystal fraction postulated in the Takayanagi model. The γ-relaxation is considered to be associated primarily with an amorphous relaxation, consistent with the conclusions of previous workers for materials of lower draw ratio.

13.
The dynamic shear behaviour of oriented linear polyethylene has been studied with particular reference to previous studies of the dynamic tensile modulus. First, it has been shown that the increase in the −50°C plateau shear modulus with draw ratio can be understood on a Takayanagi-type model in terms of an increase in crystal continuity. The crystal continuity is estimated from the longitudinal crystal thickness and the long period on the basis of the random crystalline bridge model. At a similar level of sophistication it is also possible to explain the cross-over in the ranking of samples of increasing draw ratio with change of temperature. The dynamic mechanical behaviour is then considered in terms of a simple extension of this Takayanagi model in which crystalline sequences which span two or more adjacent lamellae are regarded as the fibre phase in a short fibre composite. It can be shown that this model gives a satisfactory prediction of the changes in dynamic tensile modulus and loss with temperature, for a range of samples with different degrees of crystal continuity.

14.
This paper presents an extension of a general parametric class of transitional models of order p. In these models, the conditional distribution of the current observation, given the present and past history, is a mixture of conditional distributions, each of them corresponding to the current observation, given each one of the p lagged observations. Such conditional distributions are constructed using bivariate copula models which allow for a rich range of dependence suitable for modelling non-Gaussian time series. Fixed and time-varying covariates can be included in the models. These models have the advantage of straightforward construction and estimation for the analysis of time series and more general longitudinal data. A poliomyelitis incidence data set is used to illustrate the proposed methods. Contrary to the conclusions of other researchers, whose methods are mainly based on linear models, we find significant evidence of a decreasing trend in polio infection after accounting for seasonality.

15.
Scientific data, as a sequential or a simple random sample, often indicate a unimodal, right-skewed population. For such data, the ubiquitous symmetry assumption and the Gaussian model are inappropriate and, in case of high skewness, even corrections using devices such as the Box-Cox transformation are inadequate. In such cases, the recently introduced M-Gaussian distribution, which may be described as an R-symmetric Gaussian twin, with its mode as the centrality parameter, can be an appropriate model. In this article, the concept of R-symmetry, the basic properties of the M-Gaussian distribution and some analogies between Gaussian and M-Gaussian distributions are reviewed. Then the sequential probability ratio test (SPRT) for a simple hypothesis about the mode of an M-Gaussian population, assuming the dispersion parameter to be known, is derived. The average sample number (ASN) and operating characteristic (OC) function are obtained and the robustness properties of the test with respect to the harmonic variance assumption are studied. The results are compared with the existing parallel studies for the mean of the inverse Gaussian (IG) distribution.
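The SPRT logic can be sketched for the ordinary Gaussian mean, standing in for the M-Gaussian mode (whose likelihood ratio takes an analogous form); thresholds follow Wald's approximations, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def sprt(sample_stream, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald SPRT for H0: mu = mu0 vs H1: mu = mu1, known sigma."""
    a = np.log(beta / (1 - alpha))        # lower (accept H0) boundary
    b = np.log((1 - beta) / alpha)        # upper (accept H1) boundary
    llr, n = 0.0, 0
    for x in sample_stream:
        n += 1
        # Log-likelihood-ratio increment for a Gaussian observation.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr <= a:
            return "accept H0", n
        if llr >= b:
            return "accept H1", n
    return "no decision", n

# Data generated under H1 (mu = 1).
data = rng.normal(loc=1.0, scale=1.0, size=1000)
decision, n_used = sprt(data, mu0=0.0, mu1=1.0, sigma=1.0)
print(decision, n_used)
```

The average sample number (ASN) of such a test is typically far smaller than the fixed sample size needed for the same error rates, which is the motivation for deriving the SPRT for the M-Gaussian mode.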

16.
When characterizing solutions of random coil polymers by static light scattering (SLS) or dynamic light scattering (DLS), linear regression is used to fit experimental data to theoretical relationships. These relationships are expressed as polynomial equations, which contain two independent variables—sample concentration and scattering angle—and a response or dependent variable that is related to radiation intensities (SLS) or intensity fluctuations (DLS). The coefficients of the terms in the polynomial are used to estimate parameters such as molecular weight and polymer coil radius of gyration. One major problem during data analysis involves deciding which polynomial model is appropriate for use with the data that inherently contains a high level of random noise that is produced by the presence of dust in the solutions. Dust is an especially troublesome problem when dealing with large polymers in aqueous solutions. Polynomial models having more terms than justified are unacceptable because the coefficients of these models are excessively corrupted by the noise. Thus, conclusions from unjustified models can be erroneous. This article discusses use of a factorial experimental design technique that obtains an acceptable model for fitting light scattering data containing high levels of random noise. © 1995 John Wiley & Sons, Inc.

17.
18.
Suppose we wish to estimate the mean of some polynomial function of random variables from two independent Bernoulli populations, the parameters of which, themselves, are modeled as independent beta random variables. It is assumed that the total sample size for the experiment is fixed, but that the number of experimental units observed from each population may be random. This problem arises, for example, when estimating the fault tolerance of a system by testing its components individually. Using a decision-theoretic approach, we seek to minimize the Bayes risk that arises from using a squared-error loss function. The Bayes estimator can be determined in a straightforward manner; the problem of optimal estimation reduces, therefore, to a problem of optimal allocation of the samples between the two populations. This can be solved via dynamic programming. Similar programming techniques are utilized to evaluate properties of a number of ad hoc allocation strategies that might also be considered for use in this problem. Two sample polynomials are analyzed along with a number of examples indicating the effects of different prior parameter settings. The effects of differences between prior parameters used in the design and analysis stages of the experiment are also examined. For the polynomials considered, the adaptive strategies are found to be especially robust. We discuss computational techniques that facilitate such analyses by permitting rapid re-evaluation of strategies. Capabilities of this sort encourage people to explore designs more fully and to consider them from a number of different viewpoints.
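A simplified sketch for the example polynomial p1·p2: under independent beta priors the Bayes estimator under squared-error loss is the product of posterior means, and the preposterior risk of each fixed split of the total sample can be approximated by Monte Carlo. Brute-force enumeration replaces the paper's dynamic programming, and adaptive strategies are not modeled; all prior settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def post_var_product(a1, b1, a2, b2):
    # Posterior variance of p1*p2 for independent Beta(a_i, b_i) posteriors.
    m1, m2 = a1 / (a1 + b1), a2 / (a2 + b2)
    s1 = a1 * (a1 + 1) / ((a1 + b1) * (a1 + b1 + 1))  # E[p1^2]
    s2 = a2 * (a2 + 1) / ((a2 + b2) * (a2 + b2 + 1))  # E[p2^2]
    return s1 * s2 - (m1 * m2) ** 2

def bayes_risk(n1, n2, a=1.0, b=1.0, sims=4000):
    """Monte Carlo preposterior risk of the Bayes estimator of p1*p2
    when n1, n2 Bernoulli trials are allocated to the two populations."""
    p1 = rng.beta(a, b, sims)
    p2 = rng.beta(a, b, sims)
    x1 = rng.binomial(n1, p1)
    x2 = rng.binomial(n2, p2)
    return np.mean(post_var_product(a + x1, b + n1 - x1, a + x2, b + n2 - x2))

N = 20
risks = {n1: bayes_risk(n1, N - n1) for n1 in range(1, N)}
best = min(risks, key=risks.get)
print(best, round(float(risks[best]), 5))
```

With symmetric priors and a symmetric polynomial, the optimal fixed split sits near N/2; asymmetric priors or polynomials shift it, which is where the paper's dynamic-programming and adaptive strategies become valuable.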

19.
Nowadays, environmental regulations of fossil fuel emissions impose stricter limits on contaminants such as sulfur, nitrogen and aromatics in middle distillate petroleum fractions. The most important process used in oil refineries to reach the required specifications is catalytic hydrogenation. A key issue in optimizing these units is the availability of reliable kinetic models for this complex, three-phase reaction. A detailed, phenomenological model of the reactor would demand excessive experimental effort for consistently estimating all the necessary kinetic and transport parameters. Thus, a simplified approach is generally used for routine assessment of new catalysts and/or new streams to be processed. Due to the difficulty of characterization of these streams, which are very complex mixtures of numerous species, most models are based on pseudo-components. This approach, however, does not allow for model generalization with respect to feed composition. This paper presents and discusses a new methodology for dealing with this problem. Conventional neural network (NN) training algorithms are used for inducing NNs to predict kinetic parameters of simplified models for the catalytic hydrodesulfurization (HDS) reaction, using macro properties of the feed as input. As in practice there are rarely enough experimental data to subsidize empirical learning algorithms, the paper proposes and describes an ad hoc methodology for artificially enlarging the initial scarce experimental data. Results from inferring kinetic parameters of the catalytic removal of sulfur using NNs, based on macro-properties of oil middle distillates, are presented and discussed.

20.
There has recently been an upsurge of interest in time series models for count data. Many papers focus on the model with first‐order (Markov) dependence and Poisson innovations. Our paper considers practical models that can capture higher‐order dependence based on the work of Joe (1996). In this framework we are able to model both equidispersed and overdispersed marginal distributions of data. The latter is approached using generalized Poisson innovations. Central to the models is the use of the property of closure under convolution of certain families of random variables. The models can be thought of as stationary Markov chains of finite order. Parameter estimation is undertaken by maximum likelihood, inference procedures are considered and means of assessing model adequacy employed. Applications to two new data sets are provided.
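A minimal sketch of the first-order baseline the abstract refers to: a Poisson INAR(1) count series built from binomial thinning, which is equidispersed and Markov of order one (the paper's higher-order dependence and generalized Poisson innovations are beyond this sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, lam, n = 0.6, 2.0, 5000

# INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, where ∘ is binomial thinning
# and eps_t ~ Poisson(lam). The Poisson family is closed under this construction.
x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))  # start near the stationary mean
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

# Stationary mean lam/(1-alpha) = 5; lag-1 autocorrelation equals alpha.
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(float(x.mean()), 2), round(float(acf1), 2))
```

Replacing the Poisson innovations with generalized Poisson ones is what allows the overdispersed marginals discussed above, while the closure-under-convolution property keeps the marginal family tractable.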


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)