Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Adjustment for covariates (or poststratification) is frequently used in the analysis of randomized clinical trials, mainly to eliminate residual bias resulting from imbalance between treatment groups on some important covariates. Usually, the covariate effect is modeled with the data at hand. In this paper, we present a new method of poststratification ("constrained poststratification") which consists of estimating the prognostic significance of covariates in a large historical database, transferring the model's coefficients into the (smaller) randomized trial data set, and estimating treatment effects conditional on this a priori information. In a simulated experiment, constrained poststratification not only reduced bias but also improved the efficiency of the treatment-effect estimate.
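A minimal sketch of the constrained-poststratification idea described above: covariate coefficients are fitted on a large historical sample and then held fixed (as an offset) when estimating the treatment effect in a smaller trial. All data, sizes, and coefficient values here are simulated illustrations, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical database: outcome depends on two covariates, no treatment.
n_hist = 5000
X_hist = rng.normal(size=(n_hist, 2))
beta_true = np.array([1.5, -0.8])
y_hist = X_hist @ beta_true + rng.normal(size=n_hist)

# Step 1: estimate covariate coefficients from the historical data.
beta_hat, *_ = np.linalg.lstsq(X_hist, y_hist, rcond=None)

# Randomized trial: small sample, true treatment effect 2.0.
n_trial = 100
X_trial = rng.normal(size=(n_trial, 2))
t = rng.integers(0, 2, size=n_trial).astype(float)
y_trial = 2.0 * t + X_trial @ beta_true + rng.normal(size=n_trial)

# Step 2: subtract the a priori covariate contribution (an offset),
# then estimate the treatment effect on the adjusted outcome.
y_adj = y_trial - X_trial @ beta_hat
design = np.column_stack([np.ones(n_trial), t])
coef, *_ = np.linalg.lstsq(design, y_adj, rcond=None)
print(round(coef[1], 2))  # estimated treatment effect, near 2.0
```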

2.
A statistical study is presented quantifying the effects of covariates such as gender, age, expression, image resolution, and focus on three face recognition algorithms. Specifically, a Generalized Linear Mixed Effects model is used to relate probability of verification to subject and image covariates. The data and algorithms are selected from the Face Recognition Grand Challenge, and the results show that the effects of covariates are strong and algorithm specific. The paper presents in detail all of the significant effects, including interactions among covariates. One significant conclusion is that covariates matter: the variation in verification rates as a function of covariates is greater than the difference in average performance between the two best algorithms. Another is that few or no universal effects emerge; almost no covariates affect all algorithms in the same way and to the same degree. To highlight one specific effect, there is evidence that verification systems should enroll subjects with smiling rather than neutral expressions for best performance.

3.
The object-oriented philosophy abstracts the essential immutable qualities of the components of the finite element method into classes of objects. It facilitates easy modification without creating a ripple effect through the whole program, and reduces the knowledge of the source code required to a minimum. This paper presents a prototype implementation of a flexible computational platform based on an object-oriented approach and its application in a FORTRAN environment, which facilitates reuse of previously developed components. IDARC3D, a program capable of linear and nonlinear, static as well as dynamic analyses of multistory three-dimensional buildings with or without energy dissipation devices, originally developed in FORTRAN with conventional programming practices, has been transformed to the object-oriented framework. Using the new object-oriented platform, representative examples were analysed and the results were compared with those of well-known software packages.
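A toy illustration (in Python, not IDARC3D's actual FORTRAN-hosted class design) of the object-oriented abstraction the abstract describes: element types share a common interface, so a new element plugs in without rippling changes through the assembler. All class names and property values are hypothetical.

```python
import numpy as np

class Element:
    """Abstract FEM component: every element knows its own stiffness."""
    def stiffness(self):
        raise NotImplementedError

class TrussElement(Element):
    def __init__(self, E, A, L):
        self.E, self.A, self.L = E, A, L
    def stiffness(self):
        # Axial stiffness matrix for a two-node bar element.
        k = self.E * self.A / self.L
        return np.array([[k, -k], [-k, k]])

class SpringElement(Element):
    def __init__(self, k):
        self.k = k
    def stiffness(self):
        return np.array([[self.k, -self.k], [-self.k, self.k]])

# The assembler depends only on the Element interface, so adding a new
# element class requires no change here.
elements = [TrussElement(E=210e9, A=1e-4, L=2.0), SpringElement(k=5e6)]
total = sum(e.stiffness()[0, 0] for e in elements)
print(total)  # combined diagonal stiffness contribution
```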

4.
Mixed model-based estimation of additive or geoadditive regression models has become popular throughout recent years. It provides a unified and modular framework that facilitates joint estimation of nonparametric covariate effects and the corresponding smoothing parameters. Therefore, extensions of mixed model-based inference to a Cox-type regression model for the hazard rate are considered, allowing for a combination of general censoring schemes for the survival times and a flexible, geoadditive predictor. In particular, the proposed methodology allows for arbitrary combinations of right, left, and interval censoring as well as left truncation. The geoadditive predictor comprises time-varying effects, nonparametric effects of continuous covariates, spatial effects, and potentially a number of extensions such as cluster-specific frailties or interaction surfaces. In addition, all covariates are allowed to be piecewise constant time-varying. Nonlinear and time-varying effects as well as the baseline hazard rate are modeled by penalized splines. Spatial effects can be included based on either Markov random fields or stationary Gaussian random fields. Estimation is based on a reparametrization of the model as a variance component mixed model. The variance parameters, corresponding to inverse smoothing parameters, can then be determined using an approximate marginal likelihood approach. An analysis of childhood mortality in Nigeria serves as an application, where the interval censoring framework additionally makes it possible to deal with the problem of heaped survival times. The effect of ignoring the impact of interval-censored observations is investigated in a simulation study.

5.
Missing data often occur in regression analysis. Imputation, weighting, direct likelihood, and Bayesian inference are typical approaches for missing data analysis. The focus is on missing covariate data, a common complication in the analysis of sample surveys and clinical trials. A key quantity when applying weighted estimators is the mean score contribution of observations with missing covariate(s), conditional on the observed covariates. This mean score can be estimated parametrically, or nonparametrically by its empirical average using the complete case data in the case of repeated values of the observed covariates, typically assuming categorical or categorized covariates. A nonparametric kernel-based estimator is proposed for this mean score, allowing full exploitation of the continuous nature of the covariates. The performance of the kernel-based method is compared to that of a complete case analysis, inverse probability weighting, doubly robust estimators, and multiple imputation through simulations.
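A hedged sketch of the core ingredient: a Nadaraya-Watson kernel estimate of a conditional mean from complete cases, standing in for the paper's mean-score estimator (the weighting and score details are simplified away; data, bandwidth, and the quadratic target are illustrative).

```python
import numpy as np

def nw_estimate(x0, x_obs, s_obs, h=0.1):
    """Kernel-weighted average of the scores s_obs at covariate value x0."""
    w = np.exp(-0.5 * ((x_obs - x0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * s_obs) / np.sum(w)

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 500)                    # observed continuous covariate
s = x**2 + rng.normal(scale=0.1, size=500)     # noisy "score contributions"

# Conditional mean at x = 0.5; the true value here is 0.25.
est = nw_estimate(0.5, x, s)
print(round(est, 2))  # near the true value 0.25
```

Because the kernel averages over a neighborhood of x0, no categorization of the continuous covariate is needed, which is the point the abstract makes.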

6.
Mapping multiple quantitative trait loci (QTL) is commonly viewed as a problem of model selection. Various model selection criteria have been proposed, primarily in the non-Bayesian framework. The deviance information criterion (DIC) is the most popular criterion for Bayesian model selection and model comparison but has not been applied to Bayesian multiple QTL mapping. A derivation of the DIC is presented for multiple interacting QTL models, and calculation of the DIC is demonstrated using posterior samples generated by Markov chain Monte Carlo (MCMC) algorithms. The DIC measures posterior predictive error by penalizing the fit of a model (deviance) by its complexity, determined by the effective number of parameters. The effective number of parameters simultaneously accounts for the sample size, the cross design, the number and lengths of chromosomes, covariates, the number of QTL, the type of QTL effects, and QTL effect sizes. The DIC provides a computationally efficient way to perform sensitivity analysis and can be used to quantitatively evaluate whether including environmental effects, gene-gene interactions, and/or gene-environment interactions in the prior specification is worth the extra parameterization. The DIC has been implemented in the freely available package R/qtlbim, which greatly facilitates the general usage of Bayesian methodology for genome-wide interacting QTL analysis.
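A minimal sketch of computing the DIC from MCMC posterior samples for a simple normal-mean model (not the QTL-specific derivation in the paper): DIC = D̄ + pD, where D̄ is the posterior mean deviance and pD = D̄ − D(θ̄) is the effective number of parameters. The "posterior draws" here are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=3.0, scale=1.0, size=200)              # observed data
# Stand-in posterior draws for the mean of a N(mu, 1) model.
mu_samples = rng.normal(y.mean(), 1 / np.sqrt(len(y)), 2000)

def deviance(mu):
    # -2 * log-likelihood of N(mu, 1)
    return np.sum((y - mu) ** 2) + len(y) * np.log(2 * np.pi)

D = np.array([deviance(m) for m in mu_samples])
Dbar = D.mean()                             # posterior mean deviance
pD = Dbar - deviance(mu_samples.mean())     # effective number of parameters
DIC = Dbar + pD
print(round(pD, 2))  # close to 1: the model has one free parameter (mu)
```

Models with more QTL, interactions, or environmental terms raise pD, and the DIC trades that complexity against the improvement in fit, which is the sensitivity-analysis use described above.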

7.
Fuzzy Filtering for Physiological Signal Analysis
This study suggests the use of fuzzy-filtering algorithms to deal with the uncertainties associated with the analysis of physiological signals. The signal characteristics for a given situation or physiological state vary for an individual over time and also vary among individuals with the same state. These random variations are due to several factors related to the physiological behavior of individuals which cannot be taken into account in the interpretation of signal characteristics. Our approach is to reduce the effect of random variations on the analysis of signal characteristics by filtering randomness or uncertainty out of the signal using a nonlinear fuzzy filter. A fuzzy-filtering algorithm, based on a modification of the filtering algorithm of Kumar et al. [M. Kumar, N. Stoll, and R. Stoll, IEEE Trans. Fuzzy Syst., vol. 17, no. 1, pp. 150–166, Feb. 2009], is proposed for improved performance. The method is illustrated by studying the effect of head-up tilting on the heart-rate signal of 40 healthy subjects.

8.
In survival analysis, Cox's regression model is often used to assess the effect of covariates on the time of failure. This semi-parametric model has been extended to the situation where more than one cause of failure is of interest. In this paper, two semi-parametric models for the analysis of competing risks with covariates in the presence of independent random censoring are considered. Particular attention is devoted to the comparison between the two models. A method using a measure derived from the generalized variance is proposed. This method is illustrated with an example in a cancer clinical trial. A FORTRAN program for the computer implementation of the method is also discussed.

9.
The humanID gait challenge problem: data sets, performance, and analysis
Identification of people by analysis of gait patterns extracted from video has recently become a popular research problem. However, the conditions under which the problem is "solvable" are not understood or characterized. To provide a means for measuring progress and characterizing the properties of gait recognition, we introduce the humanID gait challenge problem. The challenge problem consists of a baseline algorithm, a set of 12 experiments, and a large data set. The baseline algorithm estimates silhouettes by background subtraction and performs recognition by temporal correlation of silhouettes. The 12 experiments are of increasing difficulty, as measured by the baseline algorithm, and examine the effects of five covariates on performance. The covariates are: change in viewing angle, change in shoe type, change in walking surface, carrying or not carrying a briefcase, and elapsed time between sequences being compared. Identification rates for the 12 experiments range from 78 percent on the easiest experiment to 3 percent on the hardest. All five covariates had statistically significant effects on performance, with walking surface and time difference having the greatest impact. The data set consists of 1,870 sequences from 122 subjects spanning five covariates (1.2 gigabytes of data). This infrastructure supports further development of gait recognition algorithms and additional experiments to understand the strengths and weaknesses of new algorithms. The more detailed the experimental results presented, the more detailed the possible meta-analysis and the greater the understanding. It is this potential from the adoption of this challenge problem that represents a radical departure from traditional computer vision research methodology.
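A toy sketch of the baseline's recognition step: similarity between a probe and a gallery silhouette sequence via temporal correlation of silhouettes. Frames here are tiny flattened binary arrays and the overlap measure is a simplified stand-in for the baseline's actual correlation.

```python
import numpy as np

def frame_similarity(a, b):
    # Tanimoto-style overlap between two binary silhouette frames.
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def sequence_similarity(probe, gallery):
    # Slide the probe over the gallery in time; keep the best mean overlap.
    n, m = len(probe), len(gallery)
    best = 0.0
    for shift in range(m - n + 1):
        avg = np.mean([frame_similarity(probe[i], gallery[shift + i])
                       for i in range(n)])
        best = max(best, avg)
    return best

probe = np.array([[1, 1, 0, 0], [0, 1, 1, 0]])
gallery = np.array([[0, 0, 0, 1], [1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 0, 0]])
print(sequence_similarity(probe, gallery))  # 1.0: exact match at shift 1
```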

10.
This paper presents a computer program for estimating transition probabilities between states in a stochastic model for an illness-death process which incorporates time-dependent covariates. Parameters are estimated by the method of maximum likelihood using the Newton-Raphson iterative procedure. The program provides the standard normal deviate statistics as well as the value of the maximum of the likelihood function which can be used on repeated applications to test hypotheses concerning coefficients associated with covariates. Although this program is demonstrated by using a model with two ‘illness’ states and two ‘death’ states, it is also suitable for analyzing data with models involving fewer states, such as the analysis of survival time with covariates assuming a proportional hazard model.
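A hedged sketch of the Newton-Raphson iteration used for maximum likelihood estimation, applied here to the rate of a one-parameter exponential survival model rather than the paper's full illness-death process. The failure times are illustrative.

```python
import numpy as np

times = np.array([2.0, 3.5, 1.2, 4.8, 0.9, 2.7])  # illustrative failure times
n, total = len(times), times.sum()

# Log-likelihood: n*log(lam) - lam*total. Starting value chosen inside
# the basin of convergence of the Newton iteration.
lam = 0.5
for _ in range(25):
    score = n / lam - total            # first derivative of log-likelihood
    hessian = -n / lam**2              # second derivative
    lam -= score / hessian             # Newton-Raphson update

print(round(lam, 4))  # 0.3974, the closed-form MLE n / sum(times)
```

At convergence the score is zero and the negative inverse Hessian gives the variance estimate behind the standard normal deviate statistics the abstract mentions.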

11.
12.
Twin Gaussian Processes for Structured Prediction
We describe twin Gaussian processes (TGP), a generic structured prediction method that uses Gaussian process (GP) priors on both covariates and responses, both multivariate, and estimates outputs by minimizing the Kullback-Leibler divergence between two GPs modeled as normal distributions over finite index sets of training and testing examples, emphasizing the goal that similar inputs should produce similar percepts and that this should hold, on average, between their marginal distributions. TGP captures not only the interdependencies between covariates, as in a typical GP, but also those between responses, so correlations among both inputs and outputs are accounted for. TGP is exemplified, with promising results, for the reconstruction of 3d human poses from monocular and multicamera video sequences in the recently introduced HumanEva benchmark, where we achieve 5 cm error on average per 3d marker for models trained jointly, using data from multiple people and multiple activities. The method is fast and automatic: it requires no hand-crafting of the initial pose, camera calibration parameters, or the availability of a 3d body model associated with human subjects used for training or testing.
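An illustrative computation of the quantity TGP minimizes, the Kullback-Leibler divergence between two multivariate normals. The means and covariances below are arbitrary stand-ins, not the paper's GP kernel matrices.

```python
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for multivariate normals."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu = np.zeros(2)
S = np.array([[1.0, 0.3], [0.3, 1.0]])
print(kl_mvn(mu, S, mu, S))  # ~0: a distribution diverges from itself by 0
```

In TGP, matching the input-side and output-side Gaussians through this divergence is what enforces "similar inputs produce similar outputs" at the distribution level.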

13.
We extend the conditional likelihood approach to the analysis of capture-recapture experiments for closed populations by nonparametrically modeling the relationship between capture probabilities and individual covariates using P-splines. The model allows nonparametric functions of multivariate continuous covariates as well as categorical covariates and time effects, greatly enhancing the techniques available to an analyst. To implement this approach in practice, we found it necessary to develop a robust modification of the Horvitz-Thompson estimator. The method is illustrated on several data sets and a small simulation study is conducted.
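A minimal sketch of the (unmodified) Horvitz-Thompson idea behind the abundance estimator: each captured individual is weighted by the inverse of its estimated probability of being captured at least once. The capture probabilities below are illustrative values, not fitted P-spline output.

```python
import numpy as np

# Estimated capture probabilities for the individuals actually observed.
p_capture = np.array([0.8, 0.5, 0.9, 0.4, 0.7])

# Each observed individual stands in for 1/p individuals in the population.
N_hat = np.sum(1.0 / p_capture)
print(round(N_hat, 2))  # 8.29: estimated population size
```

The paper's robust modification addresses instability when some estimated probabilities are very small, which makes the 1/p weights explode.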

14.
Engineering analysis is now regarded as an integral part of the design process, and the medial axis (MA) is often utilized in it. However, generating the MA of complicated models is computation-intensive, since it is always generated from scratch even if only a tiny modification is imposed. A novel local-adaptation-based approach to generating the MA for efficient engineering analysis is proposed in this study. With this method, the MA of a resultant model constructed from two other models via a Boolean operation or parameter modification is generated by adapting the MAs of the operand models, instead of regenerating the MA from scratch. First, several new properties of the MA, which are the fundamental basis of the proposed method, are investigated. Then, the boundaries that will vanish from or be added into the resultant model during the Boolean operation or parameter modification are found, and the region in which the MA segments (MASs) need to be regenerated is determined. Finally, the new MASs are generated for that region using an improved tracing method. The final MA of the resultant model is thus constructed by combining the newly generated MASs with the retained MASs of the operand model(s). Examples are given to illustrate the high computational efficiency of the proposed method for engineering analysis.

15.
A unique color space segmentation method is introduced. It is founded on features of human cognition, in which 11 color categories are used in processing color. In two experiments, human subjects were asked to categorize color stimuli into these 11 color categories, which resulted in markers for a Color LookUp Table (CLUT). These CLUT markers are projected onto two 2D projections of the HSI color space. By applying the newly developed Fast Exact Euclidean Distance (FEED) transform to the projections, a complete and efficient segmentation of color space is achieved. With that, a human-based color space segmentation is generated which is invariant to intensity changes. Moreover, the efficiency of the procedure facilitates the generation of adaptable, application-centered color quantization schemes. It is shown to work excellently for color analysis, texture analysis, and Color-Based Image Retrieval purposes.
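A hedged sketch of the color-categorization idea: each pixel is assigned to the nearest category marker by Euclidean distance. For simplicity the markers here are four hypothetical RGB prototypes; the paper works with 11 human-derived CLUT markers in projections of HSI space and computes distances with the FEED transform.

```python
import numpy as np

markers = {                      # illustrative category prototypes (RGB)
    "red":   (255, 0, 0),
    "green": (0, 255, 0),
    "blue":  (0, 0, 255),
    "white": (255, 255, 255),
}

def categorize(pixel):
    """Assign a pixel to the color category of its nearest marker."""
    names = list(markers)
    pts = np.array([markers[n] for n in names], dtype=float)
    d = np.linalg.norm(pts - np.array(pixel, dtype=float), axis=1)
    return names[int(np.argmin(d))]

print(categorize((200, 30, 40)))  # red
```

Precomputing this nearest-marker assignment for every cell of the color space is what turns the markers into a full segmentation, which a distance transform does in one pass.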

16.
The requirement of a constant censoring parameter β in the Koziol-Green (KG) model is too restrictive. When covariates are present, the conditional KG model (Veraverbeke and Cadarso-Suárez, 2000), which allows β to depend on the covariates, is more realistic. In this paper, using sufficient dimension reduction methods, we provide a model-free diagnostic tool to test whether β is a function of the covariates. Our method also allows us to conduct a model-free selection of the related covariates. A simulation study and a real data analysis are included to illustrate our approach.

17.
We study forecasting applications where the response variable is heavily correlated with one covariate or a small set of covariates, which we term dominant predictors. Dominant predictors commonly occur in financial forecasting, where future market prices are heavily influenced by current prices and, to a much lesser degree, by many other, more subtle factors such as weather or calendar effects. We hypothesize that dominant predictors may mask the influence of the subtle factors, reducing forecasting accuracy. Consequently, we argue that it is crucial to find means of accurately accounting for the effect of the subtle factors on the response variable. To achieve this we present a two-stage modeling methodology which postpones the introduction of dominant predictors into the model building process until all predictive value from the other covariates has been extracted. To confirm our hypothesis and to test the effectiveness of the two-stage approach, we conduct an empirical study related to forecasting the outcome of sports events, which are well known to exhibit dominant predictors. Our results confirm that especially complex, nonlinear models are vulnerable to the masking effect and benefit from the two-stage paradigm. Our findings have important implications for forecasters who operate in environments where the influence of some predictors on the variable being forecast exceeds that of other covariates by a wide margin, and we demonstrate appropriate ways to approach such forecasting tasks.
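A simulated sketch of the two-stage idea: stage one models the response using only the subtle covariate, and stage two introduces the dominant predictor on the stage-one residuals, so the dominant predictor cannot mask the subtle effect during stage one. The linear models, coefficients, and data are illustrative, not the paper's (nonlinear) setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
dominant = rng.normal(size=n)          # e.g. current market price
subtle = rng.normal(size=n)            # e.g. a calendar effect
y = 5.0 * dominant + 0.3 * subtle + rng.normal(scale=0.5, size=n)

def ols(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Stage 1: extract predictive value from the subtle covariate alone.
b1 = ols(subtle[:, None], y)
stage1_pred = subtle * b1[0]

# Stage 2: introduce the dominant predictor, keeping stage-1 output fixed.
resid = y - stage1_pred
b2 = ols(dominant[:, None], resid)
print(round(b2[0], 2))  # recovers the dominant coefficient, near 5.0
```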

18.
In this article, we present a simple PDMS surface modification method based on poly(vinyl alcohol)/glycerol (PVA/Gly) solution immersion, self-assembled absorption, and heat treatment. Contact angle and ATR-FTIR results demonstrate the superhydrophilic surface of the modified PDMS. It allows the stable production of monodisperse droplets in a highly reproducible manner. In addition, we demonstrate the fabrication of monodisperse paclitaxel (PTX)-loaded poly(l-lactic acid) (PLLA) microspheres on this kind of modified chip by solvent evaporation. The PLLA microspheres can be adjusted to a range of sizes depending on the system flow rate. Microsphere size, determined by optical microscopy and image analysis, varies by less than 4%. Compared with the results of published papers, the presented data demonstrate that the PTX-loaded PLLA microspheres show good physical properties (spherical and discrete), high drug loading and encapsulation efficiency, a small initial burst, and sustained-release behavior owing to their outstanding monodispersity. With the ability to prepare high-quality, monodisperse, biodegradable microspheres, this versatile and simple microfluidic method facilitates the development of more reliable and reproducible drug delivery systems, which have great potential to benefit pharmaceutical and biological applications.

19.
In this paper, we introduce the notion of the central mean subspace when the response is multivariate, and propose a profile least squares approach to perform estimation and inference. Unlike existing methods in the sufficient dimension reduction literature, the profile least squares method does not require any distributional assumptions on the covariates, and facilitates statistical inference on the central mean subspace. We demonstrate theoretically and empirically that the properly weighted profile least squares approach is more efficient than its unweighted counterpart. We further confirm the promising finite-sample performance of our proposal through comprehensive simulations and an application to an etiologic study on essential hypertension conducted in P. R. China.

20.
An approach for designing optimal repetitive structures under arbitrary static loading is presented. It is shown that the analysis of such infinite structures can be reduced to the analysis of the repeating module under transformed loading and boundary conditions. Consequently, both the design parameters and the analysis variables constitute a relatively small set, which facilitates the optimization process. The approach hinges on the representative cell method. It is based on formulating the analysis equations and the continuity conditions for a sequence of typical modules. Then, by means of the discrete Fourier transform, this problem translates into a boundary value problem of a representative cell in transformed variables, which can be solved by any appropriate analytical or numerical method. The real structural response anywhere in the structure is then obtained by the inverse transform. The sensitivities can also be calculated on the basis of the sensitivities of the representative cell. The method is illustrated by the design for minimum compliance with a volume constraint of an infinite plane truss. It is shown that by employing this analysis method within an optimal design scheme one can incorporate a reduced analysis problem in an intrinsically small design space.
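A sketch of the computational kernel behind this reduction: a circulant system (identical modules with periodic neighbour coupling) diagonalizes under the discrete Fourier transform, so the coupled problem splits into small independent per-frequency solves, and the inverse transform recovers the physical response. The scalar stiffness values and loads below are arbitrary illustrations of the structure, not a real truss model.

```python
import numpy as np

n = 6
c = np.array([4.0, -1.0, 0.0, 0.0, 0.0, -1.0])  # first column of circulant K
f = np.array([1.0, 0.0, 0.0, 2.0, 0.0, 0.0])    # loads on the n modules

# In the Fourier domain K is diagonal with eigenvalues fft(c), so each
# frequency is an independent scalar solve; ifft returns to physical space.
u = np.fft.ifft(np.fft.fft(f) / np.fft.fft(c)).real

# Verification against the dense circulant matrix K[i, j] = c[(i-j) mod n].
K = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
print(np.allclose(K @ u, f))  # True
```

For a real structure each "scalar" becomes a small block the size of one module's degrees of freedom, which is why the design space stays intrinsically small.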


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · ICP license 京ICP备09084417号