Similar Literature
20 similar documents found (search time: 15 ms)
1.
In many phase III clinical trials, particularly in the field of cancer, the comparison of treatments is based on both length of survival and quality of life. Subjects are followed over time until death, and during this period quality of life is assessed on a number of occasions. Simultaneous analysis of these two outcomes supplements the comparison of treatments in terms of each outcome independently with an assessment of the net effect. In addition, it provides a means of accounting for the informative dropout due to death of patients within the time frame of the quality of life study. The methods also have the potential to be extended to allow for informative dropout from the quality of life study prior to death. There are a number of broad approaches for the simultaneous analysis of quality of life and survival data. The most widely used approach in clinical research is quality-adjusted survival analysis, where treatments are compared in terms of a composite measure of quality and quantity of life. The paper reviews the different techniques for quality-adjusted survival analysis, illustrating the methodology by application to data from a phase III clinical trial in pancreatic cancer. In addition, alternative approaches using multistate survival analysis and joint modelling methods are also discussed.
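To make the composite measure concrete, the following minimal numpy sketch weights the time a patient spends in each health state by a utility and sums, in the spirit of Q-TWiST-style quality-adjusted survival; the state names, durations, and utility weights are illustrative assumptions, not values from the trial.

```python
import numpy as np

# Minimal sketch of a quality-adjusted survival (Q-TWiST-style) composite:
# weight the time spent in each health state by a utility in [0, 1] and sum.
# State names, durations, and utilities below are illustrative only.
UTILITIES = {"toxicity": 0.5, "twist": 1.0, "relapse": 0.5}  # assumed weights

def quality_adjusted_survival(durations):
    """durations: dict mapping health state -> time (months) spent in it."""
    return sum(UTILITIES[state] * time for state, time in durations.items())

# One patient: 3 months with toxicity, 20 symptom-free, 4 after relapse
patient = {"toxicity": 3.0, "twist": 20.0, "relapse": 4.0}
print(quality_adjusted_survival(patient))  # 0.5*3 + 1.0*20 + 0.5*4 = 23.5
```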

2.
Competing risks data arise naturally in medical research, when subjects under study are at risk of more than one mutually exclusive event such as death from different causes. The competing risks framework also includes settings where different possible events are not mutually exclusive but the interest lies in the first occurring event. For example, in HIV studies where seropositive subjects are receiving highly active antiretroviral therapy (HAART), treatment interruption and switching to a new HAART regimen act as competing risks for the first major change in HAART. This article introduces competing risks data and critically reviews the widely used statistical methods for estimation and modelling of the basic (estimable) quantities of interest. We discuss the increasingly popular Fine and Gray model for the subdistribution hazard of interest, which can be readily fitted using standard software under the assumption of administrative censoring. We present a simulation study, which explores the robustness of inference for the subdistribution hazard to the assumption of administrative censoring. This shows a range of scenarios within which the strictly incorrect assumption of administrative censoring has a relatively small effect on parameter estimates and confidence interval coverage. The methods are illustrated using data from HIV-1 seropositive patients from the collaborative multicentre study CASCADE (Concerted Action on SeroConversion to AIDS and Death in Europe).
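The Fine and Gray fit itself needs dedicated software, but the basic estimable quantity, the cumulative incidence function for one cause, has a simple nonparametric (Aalen-Johansen-type) estimator. The sketch below illustrates it; the data layout is assumed and ties are handled naively for brevity.

```python
import numpy as np

def cumulative_incidence(time, event, cause):
    """Nonparametric (Aalen-Johansen) cumulative incidence for one cause.

    time  : observed times
    event : 0 = censored, 1, 2, ... = cause of failure
    cause : the competing cause whose incidence is wanted
    Returns (sorted observation times, CIF values at those times).
    """
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    n = len(time)
    surv, cif = 1.0, 0.0      # overall event-free survival, target-cause CIF
    times_out, cif_out = [], []
    for i in range(n):
        at_risk = n - i
        if event[i] == cause:
            cif += surv / at_risk        # cause-specific hazard increment
        if event[i] != 0:
            surv *= 1.0 - 1.0 / at_risk  # any-cause Kaplan-Meier step
        times_out.append(time[i])
        cif_out.append(cif)
    return np.array(times_out), np.array(cif_out)

t = [2, 3, 5, 7, 8, 11, 13]
e = [1, 2, 1, 0, 2, 1, 0]                # 0 = censored, 1 and 2 compete
print(cumulative_incidence(t, e, cause=1))
```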

3.
This paper reviews recent statistical advances in HIV/AIDS therapy trials. Our emphasis is on three emerging areas that address key challenges in AIDS research: the determination of optimal treatment sequences, estimating efficacy of intended treatment, and inference for repeated measures with dependent censoring. A common theme of these topics is the use of observational data within clinical trials to answer questions not addressed by the conventional intent-to-treat analysis. We also give a brief overview of some recent contributions to other topics relevant to AIDS clinical trials, including modelling of treatment compliance data, modelling of repeated measures, and group sequential testing.

4.
We study an at-most-one-change time-series model with an abrupt change in the mean and dependent errors that fulfil certain mixing conditions. We obtain confidence intervals for the unknown change-point via bootstrapping methods. Precisely, we use a block bootstrap of the estimated centred error sequence. Then, we reconstruct a sequence with a change in the mean using the same estimators as before. The difference between the change-point estimator of the resampled sequence and that of the original sequence can be used as an approximation of the difference between the real change-point and its estimator. This enables us to construct confidence intervals using the empirical distribution of the resampled time series. A simulation study shows that the resampled confidence intervals are usually closer to their target levels and at the same time smaller than the asymptotic intervals.
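The resampling scheme translates almost directly into code. The sketch below is a minimal numpy rendering under assumptions the abstract leaves open: a CUSUM-type change-point estimator and a moving-block bootstrap with a fixed block length.

```python
import numpy as np

def change_point(x):
    """CUSUM-type estimator: argmax_k |sum_{i<=k}(x_i - mean(x))|."""
    c = np.cumsum(x - x.mean())
    return int(np.argmax(np.abs(c[:-1]))) + 1   # change after this index

def block_bootstrap_ci(x, block=20, reps=999, alpha=0.05, rng=None):
    """Sketch of the scheme described above (block length etc. assumed):
    estimate the change-point and the two means, centre the errors, draw a
    moving-block bootstrap of the errors, rebuild a series with the same
    estimated change, and use the distribution of k* - k_hat for the CI."""
    rng = np.random.default_rng(rng)
    n, k = len(x), change_point(x)
    mu1, mu2 = x[:k].mean(), x[k:].mean()
    resid = np.concatenate([x[:k] - mu1, x[k:] - mu2])
    resid -= resid.mean()                        # centred error sequence
    starts = n - block + 1
    shifts = []
    for _ in range(reps):
        idx = rng.integers(0, starts, size=int(np.ceil(n / block)))
        boot = np.concatenate([resid[s:s + block] for s in idx])[:n]
        y = boot + np.where(np.arange(n) < k, mu1, mu2)   # rebuilt series
        shifts.append(change_point(y) - k)
    lo, hi = np.quantile(shifts, [alpha / 2, 1 - alpha / 2])
    return k - int(hi), k - int(lo)              # CI for the true change-point

x = np.concatenate([np.random.default_rng(1).normal(0, 1, 120),
                    np.random.default_rng(2).normal(1.5, 1, 80)])
print(change_point(x), block_bootstrap_ci(x))
```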

5.
The paper considers the problem of sequential estimation of the mean survival time using randomly right censored data, when the loss is measured by the sum of the squared error of estimation and the cost of observations made, with per unit cost c being constant. The sequential estimator defined here is shown to be asymptotically risk efficient and asymptotically normal as c ↓ 0 under certain regularity conditions. These conditions are shown to be satisfied by a wide variety of survival distributions and censoring distributions. Furthermore, they hold trivially when the mean survival experience over a finite interval [0, T], T < ∞, is of interest.

6.
The strength of the randomized trial to yield conclusions not dependent on assumptions applies only in an ideal setting. In the real world, various complications such as loss to follow-up, missing outcomes, noncompliance and nonrandom selection into a trial force a reliance on assumptions. To handle real world complications, it is desirable to make as few and as reasonable assumptions as possible. This article reviews four techniques for using a few reasonable assumptions to design or analyse randomized trials in the presence of specific real world complications: 1) a double sampling design for survival data to avoid strong assumptions about informative censoring, 2) sensitivity analysis for partially missing binary outcomes that uses the randomization to reduce the number of parameters specified by the investigator, 3) an estimate of the effect of treatment received in the presence of all-or-none compliance that requires reasonable assumptions (a sketch of the standard estimator follows below), and 4) statistics for binary outcomes that avoid some assumptions for generalizing results to a target population.
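For technique 3, the standard instrumental-variable estimator of the complier average causal effect (CACE) divides the intention-to-treat effect by the difference in treatment receipt rates between arms. A minimal sketch on simulated data, with an assumed data layout:

```python
import numpy as np

def cace(z, d, y):
    """z: 1 if randomized to treatment; d: 1 if treatment received; y: outcome.
    Standard IV/CACE estimator: ITT effect / compliance difference."""
    z, d, y = map(np.asarray, (z, d, y))
    itt = y[z == 1].mean() - y[z == 0].mean()       # intention-to-treat effect
    receipt = d[z == 1].mean() - d[z == 0].mean()   # compliance difference
    return itt / receipt                            # complier average effect

rng = np.random.default_rng(0)
z = rng.integers(0, 2, 1000)                        # randomized assignment
d = (z == 1) & (rng.random(1000) < 0.7)             # 70% compliance, no crossover
y = 2.0 * d + rng.normal(0, 1, 1000)                # true effect on compliers = 2
print(cace(z, d, y))
```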

7.
With growing interest in oleaginous yeast as producers of future fuels and bulk chemicals, a robust, high-throughput method for estimating lipid production is required. Although the lipophilic dye Nile red is frequently used to assay large samples of yeast and microalgae, inconsistent stain permeability between species and strains limits its effectiveness for some microorganisms. In this study, the oleaginous yeast Metschnikowia pulcherrima is used to develop a fluorescence-free, cell-size-based image analysis method for estimating lipid production, which is then compared with an optimized Nile red method across several experimental scenarios. Cell size analysis (CSA) outperforms Nile red in all scenarios, correlating well with lipid extraction data when screening multiple strains, screening a subset of strains grown in different conditions, and tracking the lipid accumulation of a culture over time. Stain permeability is shown to vary significantly among the strains trialled, with lipid droplet size and cell wall thickness having a deleterious effect on the permeability of high-lipid-accumulating cells. CSA also allows culture population dynamics to be monitored, providing key process information on cell size distribution in response to changing media compositions. Practical Applications: Nile red is currently the go-to method for high-throughput lipid screening; however, staining inconsistencies in some organisms, caused by varying cell morphology, make it challenging to optimize a robust protocol. Although fluorescence-free methods exist (Raman spectroscopy, Fourier-transform infrared spectroscopy, GC-MS), the need for extensive sample preparation and specialist equipment restricts their widespread adoption. The CSA method presented here offers an accurate, robust, and cheap alternative for the study of microorganisms where fluorescence-based avenues are not feasible. Furthermore, the population dynamics collected during CSA can easily be applied to bioreactor-style processing, where tracking size distributions can provide real-time information on culture status. This additional information is valuable even if fluorescence screening is a possibility.
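The paper's exact CSA pipeline is not given in the abstract; the sketch below shows a generic cell-size measurement step with scikit-image (threshold, label, measure). The Otsu threshold, pixel scale, size filter, and synthetic test image are all assumptions standing in for the real microscopy workflow.

```python
import numpy as np
from skimage import filters, measure, morphology

def cell_size_distribution(gray_image, um_per_px=0.1, min_area_px=30):
    """Generic cell-size analysis step (details assumed, not the paper's
    pipeline): threshold, label connected cells, return equivalent
    diameters in micrometres."""
    mask = gray_image > filters.threshold_otsu(gray_image)   # segment cells
    mask = morphology.remove_small_objects(mask, min_area_px)
    labels = measure.label(mask)
    diam = [r.equivalent_diameter * um_per_px
            for r in measure.regionprops(labels)]
    return np.array(diam)

# Synthetic frame: bright disks on a dark background stand in for cells.
img = np.zeros((200, 200))
yy, xx = np.mgrid[:200, :200]
for cy, cx, r in [(50, 50, 10), (120, 140, 18), (160, 60, 14)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 1.0
sizes = cell_size_distribution(img)
print(sizes.mean(), sizes)   # mean diameter is the quantity CSA tracks
```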

8.
Hydrogen network design is an important step in the hydrogen management of a petroleum refinery, managing hydrogen distribution and consumption in a cost-effective manner. While most works in this area have primarily focused on minimization of the fresh hydrogen requirement and on hydrogen purification aspects, very few have dealt with the issue of compression costs in hydrogen network designs. This work proposes a new mathematical model for synthesizing a hydrogen network with minimum compression costs. In contrast to the existing literature, this model uses stream-dependent properties and realistic compressor cost correlations to determine the compression duty and costs, respectively. Tests on literature examples show that our model is flexible and yields solutions that compare favorably with those of previous models. Furthermore, this work also highlights the usefulness of understanding the trade-offs between the number of compressors and compression duty, and the importance of using stream-dependent conditions in estimating compression costs. © 2017 American Institute of Chemical Engineers AIChE J, 63: 3925–3943, 2017
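The paper's own correlations are not reproduced in the abstract; as background, compression duty is typically computed from the standard ideal-gas adiabatic power formula, and cost from a fitted power-law correlation on that duty. In the sketch below the stream data, efficiency, and correlation coefficients are placeholders, not the paper's values.

```python
def adiabatic_power_kw(flow_kmol_h, t_in_k, p_in, p_out, gamma=1.4, eff=0.75):
    """Ideal-gas adiabatic compression power (standard textbook formula, not
    the paper's model). Flow in kmol/h; p_in and p_out in the same unit."""
    r = 8.314                                  # kJ/(kmol K)
    n = flow_kmol_h / 3600.0                   # kmol/s
    k = gamma / (gamma - 1.0)
    w = k * n * r * t_in_k * ((p_out / p_in) ** (1.0 / k) - 1.0)
    return w / eff                             # kW, after isentropic efficiency

# Hydrogen-rich stream: gamma for H2 is about 1.41. Cost via an assumed
# power-law correlation cost = a * power**b, with a, b placeholders for the
# fitted coefficients a real design study would use.
power = adiabatic_power_kw(500.0, 310.0, p_in=10.0, p_out=25.0, gamma=1.41)
cost = 3000.0 * power ** 0.7                   # illustrative correlation only
print(round(power, 1), round(cost))
```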

9.
A total of 83 diets served over a period of 20 days to hospitalized diabetic patients were studied. The diets were modified in both calories and carbohydrates and were prepared in a centralized cooking facility. The diets studied were randomly selected, without replacement, using a random number table. Quantities of food served were determined using the direct weighing method. The nutritional value of the diets was determined by three indirect methods: the detailed method, which uses the energy and nutrient values of individual foods, and two abbreviated methods, I and II, based on reference food values and food group mean values. Calorie, protein, carbohydrate, calcium, phosphorus, iron, retinol, thiamine, riboflavin, niacin, and ascorbic acid contents were calculated for each diet. Furthermore, for each of the energy and nutrient calculations, the mean, standard deviation and variance were determined over all diets. A correlation and linear regression study was performed to establish differences between the detailed and the abbreviated methods. Also, Student's "t" test of equality of means was used to identify differences in the calculation of the nutrient content of the diets. Significant differences among the values obtained by the three methods were found. Between the values obtained by the two abbreviated methods, significant differences were found only for calcium and thiamine. In general, however, diet calculation using the abbreviated methods gave results similar to those obtained with the detailed method. Therefore, the use of the abbreviated methods at hospital level is considered convenient because they considerably reduce the work, time and costs of diet planning and evaluation. Nevertheless, their limitations should be taken into account. The preceding results also document substantial problems in the use of the two abbreviated methods studied. The differences observed between the detailed and abbreviated methods in the mean levels of most nutrients are unacceptably large, suggesting that the abbreviated methods suffer biases in estimating the nutrient content of the hospital diets studied. These problems are particularly important for the diabetic patients who composed the sample, in that dietary energy, fat, and carbohydrates were over-estimated in a consistent manner by both abbreviated methods used. Nonetheless, abbreviated methods such as those used in the present study have advantages which cannot be ignored: they are easy to use, reduce time requirements, and are conceptually simple. (ABSTRACT TRUNCATED AT 400 WORDS)
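A minimal scipy sketch of the statistical comparisons described (correlation, linear regression, and a paired Student's t test of a detailed versus an abbreviated method); the per-diet calcium values are illustrative numbers, not data from the study.

```python
import numpy as np
from scipy import stats

# Per-diet calcium (mg) by the detailed method vs abbreviated method I.
# Illustrative numbers only; the study used 83 diets and 11 nutrients.
detailed = np.array([612, 540, 498, 700, 655, 580, 610, 530], dtype=float)
abbrev_1 = np.array([640, 575, 520, 745, 690, 601, 660, 552], dtype=float)

r, _ = stats.pearsonr(detailed, abbrev_1)       # correlation study
reg = stats.linregress(detailed, abbrev_1)      # linear regression study
t, p = stats.ttest_rel(detailed, abbrev_1)      # paired t test of equal means
print(f"r={r:.3f}  slope={reg.slope:.2f}  t={t:.2f}  p={p:.4f}")
```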

10.
Novel methods for process simulation and cost analysis have been applied during manufacturing process development of a rotor blade pitch horn. The aim is to reduce costs and lead time on the one hand and to enhance part quality on the other. Fabric draping has been simulated using the kinematic draping code PAM-QUIKFORM, incorporating new processing strategies. Draping strategies were optimised using a genetic algorithm taking manufacturing constraints into account, which led to a fabric shear reduction of up to 10° with the optimised strategy implemented in manufacturing. A novel generation of prebindered carbon fibre tow material has been used to enhance the rigidity and dimensional accuracy of the preform and to minimise processing time. State-of-the-art preforming technology has been incorporated in the process, significantly increasing the degree of automation. The process was analysed using the activity-based costing methodology, which derives product costs as the sum of the costs of all activities involved. Based on this analysis, development efforts were concentrated on optimising cycle times so that the subprocesses have nearly even durations. In comparison to a manual prepreg manufacturing process, cost savings with the novel, semi-automated preforming process were quantified at ~20%.

11.
Sen (1980) introduced a full sequential sampling technique for estimating the mean of an exponential distribution by taking into account the natural sequential ordering of data collection, recruitment costs of experimental units, and the costs of monitoring. Sen (1980) showed, among other things, that his time-sequential estimation procedure was asymptotically risk efficient. In this paper, second-order expansions of the average sample size and regret associated with Sen's (1980) methodology are obtained first. Then, it is shown how the idea of acceleration can be used to curtail sampling operations further. In the context of such accelerated time-sequential estimation schemes, similar second-order characteristics are reported.

12.
This paper describes a methodology for measuring rheological flow properties in-line, in real-time, based on simultaneous measurements of velocity profiles using an ultrasound velocity profiling (UVP) technique with pressure difference (PD) technology. The methodology allows measurements that are rapid, non-destructive and non-invasive and has several advantages over methods presented previously. The set-up used here allows direct access to demodulated echo amplitude data, thus providing an option to switch between time domain algorithms and algorithms based on FFT for estimating velocities, depending on the signal-to-noise ratio (SNR) and time resolution required. Software based on the MATLAB® graphical user interface (GUI) has been developed and provides a powerful and rapid tool for visualizing and processing the data acquired, giving rheological information in real-time and in excellent agreement with conventional methods. This paper further focuses on crucial aspects of the methodology: implementation of low-pass filter and singular value decomposition (SVD) methods, non-invasive measurements and determination of the wall positions using channel correlation and methods based on SVD. Measurements of sound velocity and attenuation of ultrasound in-line were introduced to increase measurement accuracy and provide an interesting approach to determine particle concentration in-line. The UVP-PD methodology presented may serve as an in-line tool for non-invasive, real-time monitoring and process control.
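The rheological step of the UVP-PD idea can be illustrated compactly: fit a power-law profile shape to the measured velocities, take the wall shear stress from the pressure difference, and combine the two. The geometry, units, and synthetic "measurement" below are assumptions, not the paper's set-up.

```python
import numpy as np
from scipy.optimize import curve_fit

R, L, dP = 0.01, 1.0, 8000.0                 # pipe radius (m), length (m), Pa

def profile(r, v_max, n):
    """Laminar pipe velocity profile of a power-law fluid with index n."""
    return v_max * (1.0 - (np.abs(r) / R) ** ((n + 1.0) / n))

# Synthetic UVP "measurement": true n = 0.6, plus sensor noise.
r = np.linspace(0.05 * R, 0.95 * R, 30)
v_meas = profile(r, 0.5, 0.6) + np.random.default_rng(3).normal(0, 0.003, r.size)

(v_max, n), _ = curve_fit(profile, r, v_meas, p0=[0.4, 1.0],
                          bounds=([0.0, 0.05], [5.0, 5.0]))
tau_wall = dP * R / (2.0 * L)                # wall shear stress from PD
gdot_wall = v_max * (n + 1.0) / (n * R)      # wall shear rate from the profile
K = tau_wall / gdot_wall ** n                # power-law consistency index
print(f"n={n:.3f}  K={K:.3f} Pa.s^n")
```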

13.
Causality inference and root cause analysis are important for fault diagnosis in the chemical industry. Due to the increasing scale and complexity of chemical processes, data-driven methods have become indispensable in causality inference. This paper proposes an approach based on the concept of transfer entropy, presented by Schreiber in 2000, to generate a causal map. To achieve better performance in estimating the time delay of causal relations, a modified form of the transfer entropy is presented in this paper. Case studies on two simulated chemical processes, including the benchmark Tennessee Eastman process, are performed to illustrate the effectiveness of this approach.
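For reference, a histogram estimate of Schreiber's original transfer entropy for a single candidate delay is sketched below; the paper's modified, delay-aware form is not reproduced, and the bin count and lag are assumptions.

```python
import numpy as np

def transfer_entropy(x, y, bins=8, lag=1):
    """Histogram estimate of Schreiber's transfer entropy T(X -> Y), in bits,
    for one candidate delay `lag`:
    T = sum p(yf, yp, xp) * log2[ p(yf | yp, xp) / p(yf | yp) ]."""
    x = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    y = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    yf, yp, xp = y[lag:], y[lag - 1:-1], x[:-lag]  # y future, y past, x past
    n = len(yf)
    joint = {}
    for triple in zip(yf, yp, xp):
        joint[triple] = joint.get(triple, 0) + 1
    te = 0.0
    for (a, b, c), cnt in joint.items():
        p_abc = cnt / n
        p_bc = sum(v for (_, j, k), v in joint.items() if (j, k) == (b, c)) / n
        p_ab = sum(v for (i, j, _), v in joint.items() if (i, j) == (a, b)) / n
        p_b = sum(v for (_, j, _), v in joint.items() if j == b) / n
        te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)   # y follows x with delay 1
print(transfer_entropy(x, y), transfer_entropy(y, x))  # x -> y dominates
```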

14.
Oil drilling is a high-risk, capital-intensive systems engineering undertaking. To provide intelligent early warning of anomalies during drilling, shorten non-productive time, and reduce the associated risks, a case-based reasoning (CBR) anomaly early-warning method based on moving-window sparse principal component analysis (MWSPCA), denoted MWSPCA-CBR, is proposed. The MWSPCA algorithm first analyses real-time data from the drilling process to rapidly locate when an anomaly may have occurred; case-based reasoning is then used to analyse the anomalous data, determine the likely anomaly type, and provide the experts monitoring the process in real time with methods for handling the anomaly. The proposed method was applied to anomaly early warning in the oil-drilling process, and experimental results verified its feasibility and effectiveness, offering a new way to reduce risk-related costs in the drilling process.
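The abstract does not specify the monitoring statistic; the sketch below shows one common way to realize a moving-window sparse-PCA monitor, flagging samples whose squared prediction error (SPE) exceeds a window-based limit. The window length, component count, alarm threshold, and the use of sklearn's SparsePCA are all assumptions standing in for the paper's MWSPCA.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

def spe_alarms(data, window=200, n_comp=3, alpha=2.0):
    """Moving-window sparse-PCA monitor (details assumed): refit on a sliding
    window of recent history and flag the next sample if its squared
    prediction error exceeds mean + alpha * std of the window's own SPE."""
    alarms = []
    for t in range(window, len(data)):
        win = data[t - window:t]
        mu, sd = win.mean(0), win.std(0) + 1e-9
        model = SparsePCA(n_components=n_comp, random_state=0).fit((win - mu) / sd)

        def spe(rows):
            z = (rows - mu) / sd
            recon = model.transform(z) @ model.components_
            return (((z - model.mean_) - recon) ** 2).sum(axis=1)

        base = spe(win)
        if spe(data[t:t + 1])[0] > base.mean() + alpha * base.std():
            alarms.append(t)
    return alarms

rng = np.random.default_rng(1)
latent = rng.normal(size=(120, 1))
X = latent @ np.ones((1, 4)) + 0.1 * rng.normal(size=(120, 4))
X[100:, 0] += 3.0                          # fault breaks the correlation structure
print(spe_alarms(X, window=60, n_comp=1))  # should flag most samples from t = 100
```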

15.
Thermophysical property prediction for anisotropic materials based on the inverse heat conduction problem
杨晨, 高思云. 《化工学报》, 2007, 58(6): 1378-1384
Several methods for predicting the thermal conductivity of solids from the inverse heat conduction problem were studied: a Bayesian statistical method, the Levenberg-Marquardt method, and a genetic algorithm were each used to predict the thermophysical properties of two-dimensional anisotropic materials, and the results were analysed and compared. The results show that, in the Bayesian approach, the solution of the inverse heat conduction problem is the mathematical expectation of its posterior probability density, and the posterior probability density function (PPDF) is computed from the measured temperatures; a Markov chain Monte Carlo algorithm is used to explore the posterior state space and obtain statistical estimates of the unknown thermal conductivity, with Metropolis-Hastings sampling used to construct the Markov chain and the post-convergence samples retained for analysis. The genetic algorithm is a comparatively new method for optimal estimation and can also be used to solve the inverse problem.
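A minimal sketch of the Metropolis-Hastings step described above, reduced to a one-dimensional steady-state slab with a single unknown conductivity (the paper treats two-dimensional anisotropic materials; the geometry, heat flux, noise level, and flat prior here are assumptions):

```python
import numpy as np

# 1-D steady-state slab: T(x) = T0 - q * x / k, so measured temperatures
# inform the conductivity k. All physical values below are illustrative.
rng = np.random.default_rng(0)
T0, q, sigma = 300.0, 1000.0, 0.5            # K, W/m^2, sensor noise (K)
xs = np.linspace(0.01, 0.05, 8)              # sensor positions (m)
k_true = 15.0                                # W/(m K), to be recovered
data = T0 - q * xs / k_true + rng.normal(0, sigma, xs.size)

def log_post(k):                             # Gaussian likelihood, flat prior k > 0
    if k <= 0:
        return -np.inf
    resid = data - (T0 - q * xs / k)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

k, chain = 10.0, []                          # random-walk Metropolis-Hastings
lp = log_post(k)
for _ in range(20000):
    prop = k + rng.normal(0, 0.5)            # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
        k, lp = prop, lp_prop
    chain.append(k)
post = np.array(chain[5000:])                # drop burn-in, keep converged part
print(post.mean(), post.std())               # posterior mean ~ k_true
```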

16.
An optimally designed batch-storage network that uses a periodic square wave model provides analytical lot-sizing equations for a complex supply chain network characterized as a multisupplier, multiproduct, multistage, nonserial, multicustomer, cyclic system with recycling or remanufacturing. The network structure includes multiple currency flows and material flows. The processes involve multiple feedstock and product materials with fixed compositions that are highly suitable for production processes. Transportation processes that carry multiple materials of unknown composition are included in this study, and the time frame is varied from a single infinite period to multiple finite periods to accommodate nonperiodic parameter variations. The objective function in the optimization is chosen to minimize the difference between the opportunity costs of currency/material inventories and stockholder benefits given in the numeraire currency. Expressions for the Kuhn-Tucker conditions for the optimization problem are reduced to a multiperiod subproblem describing the average flow rates and the analytical lot-sizing equations. The multiperiod lot-sizing equations are shown to differ from their single-period counterparts. The multiperiod subproblem yields a multiperiod planning model that has many advantages over existing planning models. For example, it contains terms that represent operation frequency dependent costs. Realistically sized numerical examples that deal with multinational corporations are formulated and tested. The effects of corporate income taxes, interest rates, and exchange rates are presented. © 2011 American Institute of Chemical Engineers AIChE J, 2011
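The paper's multiperiod lot-sizing equations are not reproduced in the abstract. For orientation only, the classical single-item economic order quantity below is the simplest analytical lot-sizing equation of this kind, balancing setup frequency against holding cost; it is background intuition, not the paper's result.

```python
import math

def eoq(demand_rate, setup_cost, holding_cost):
    """Classical economic order quantity Q* = sqrt(2 * D * S / h):
    the simplest analytical lot-sizing equation, given here only as
    background to the multiperiod network equations discussed above."""
    return math.sqrt(2.0 * demand_rate * setup_cost / holding_cost)

# 1200 units/yr demand, $500 per setup, $4 per unit-year holding (illustrative)
q = eoq(1200.0, 500.0, 4.0)
print(round(q, 1), "units per lot;", round(1200.0 / q, 2), "lots per year")
```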

17.
The aim of this paper is to review some key techniques of Bayesian methods of sample size determination. The approach is to cover a small number of simple problems, such as estimating the mean of a normal distribution. The methods considered are in two groups: inferential and decision theoretic. In the inferential Bayesian methods of sample size determination, we are solely concerned with the inference about the parameter(s) of interest. The fully Bayesian or decision theoretic approach treats the problem as a decision problem and employs a loss or utility function.
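For the simplest of these problems, estimating a normal mean with known variance, an inferential criterion picks the smallest n whose posterior credible interval is narrow enough. A sketch under assumed conjugate-prior conventions (prior information expressed as n0 equivalent observations):

```python
import math
from scipy.stats import norm

def bayes_n_normal_mean(sigma, halfwidth, n0=0.0, conf=0.95):
    """Inferential Bayesian sample size for a normal mean with known sigma
    and a conjugate normal prior worth n0 observations (assumed setup):
    smallest n with posterior credible half-width
        z * sigma / sqrt(n + n0) <= halfwidth.
    With n0 = 0 this reduces to the usual frequentist formula."""
    z = norm.ppf(0.5 + conf / 2.0)
    return max(0, math.ceil((z * sigma / halfwidth) ** 2 - n0))

print(bayes_n_normal_mean(sigma=10.0, halfwidth=2.0))        # no prior information
print(bayes_n_normal_mean(sigma=10.0, halfwidth=2.0, n0=25)) # prior worth 25 obs
```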

18.
19.
If a physical component or device employed in a process fails at a certain time instant T+, but due to an event unrelated to the evolution of the process or its mechanism, the observed survival time is called censored. Since the true survival time T might have been found to be any time instant T > T+ had the process-independent event not occurred, the analysis differs from purely uncensored-data analysis. The subject matter is treated numerically for an electrochemical system by estimating its survival function, and by comparing tank-flow with tubular-flow electrolyzer performance using two-sample observation sets of cathode failures.
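The standard estimator of a survival function from right-censored data is the Kaplan-Meier product-limit estimator; a minimal sketch follows, with illustrative cathode lifetimes rather than the paper's data.

```python
import numpy as np

def kaplan_meier(time, observed):
    """Product-limit estimate of the survival function from right-censored
    data. time: observed times; observed: 1 if failure seen, 0 if censored
    (the T+ case above). Returns distinct failure times and S(t) just after."""
    time, observed = np.asarray(time, float), np.asarray(observed, int)
    ts, surv, s = [], [], 1.0
    for t in np.unique(time[observed == 1]):
        at_risk = np.sum(time >= t)                     # still under observation
        deaths = np.sum((time == t) & (observed == 1))  # failures at this time
        s *= 1.0 - deaths / at_risk
        ts.append(t)
        surv.append(s)
    return np.array(ts), np.array(surv)

# Cathode lifetimes (h, illustrative); observed = 0 marks a censored unit.
t = [110, 140, 140, 190, 240, 240, 310, 400]
obs = [1, 1, 0, 1, 0, 1, 1, 0]
print(kaplan_meier(t, obs))
```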

20.
FIRST-ORDER INTEGER-VALUED AUTOREGRESSIVE (INAR(1)) PROCESS
A simple model for a stationary sequence of integer-valued random variables with lag-one dependence is given and is referred to as the integer-valued autoregressive of order one (INAR(1)) process. The model is suitable for counting processes in which an element of the process at time t can be either the survival of an element of the process at time t - 1 or the outcome of an innovation process. The correlation structure and the distributional properties of the INAR(1) model are similar to those of the continuous-valued AR(1) process. Several methods for estimating the parameters of the model are discussed, and the results of a simulation study for these estimation methods are presented.
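The survival-plus-innovation structure is easy to simulate with binomial thinning, and the lag-one autocorrelation gives a moment estimator of the thinning parameter. A sketch, with Poisson innovations assumed and Yule-Walker-style moments as one of the several estimation methods the paper compares:

```python
import numpy as np

def simulate_inar1(alpha, lam, n, rng=None):
    """X_t = alpha o X_{t-1} + e_t: binomial thinning of the previous count
    (each element survives with probability alpha) plus Poisson(lam)
    innovations."""
    rng = np.random.default_rng(rng)
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1.0 - alpha))          # start near stationarity
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def yule_walker_inar1(x):
    """Moment estimates: alpha from the lag-1 autocorrelation, lambda from
    the stationary mean lam / (1 - alpha)."""
    xc = x - x.mean()
    alpha = (xc[1:] * xc[:-1]).sum() / (xc * xc).sum()
    return alpha, (1.0 - alpha) * x.mean()

x = simulate_inar1(alpha=0.6, lam=2.0, n=5000, rng=42)
print(yule_walker_inar1(x))   # should be close to (0.6, 2.0)
```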
