Similar Documents
20 similar documents found (search time: 15 ms)
1.
Mixture cure models (MCMs) have been widely used to analyze survival data with a cure fraction. The MCMs postulate that a fraction of the patients are cured of the disease and that the failure time for the uncured patients follows a proper survival distribution, referred to as the latency distribution. The MCMs have been extended to bivariate survival data by modeling the marginal distributions. In this paper, the marginal MCM is extended to multivariate survival data. The new model is applicable to survival data with varying cluster sizes and interval censoring. The proposed model allows covariates to be incorporated into both the cure fraction and the latency distribution for the uncured patients. The primary interest is to estimate the marginal parameters in the mean structure, where the correlation structure is treated as nuisance parameters. The marginal parameters are estimated consistently by treating the observations within a cluster as independent. The variances of the parameters are estimated by the one-step jackknife method. The proposed method does not depend on the specification of the correlation structure. Simulation studies show that the new method works well when the marginal model is correct. The performance of the MCM is also examined when the clustered survival times share a common random effect. The MCM is applied to data from a smoking cessation study.

2.
Sample size determination is essential to planning clinical trials. Jung (2008) established a sample size calculation formula for paired right-censored data based on the logrank test, which has been well studied for comparing independent survival outcomes. An alternative to rank-based methods for independent right-censored data, advocated by Pepe and Fleming (1989), tests for differences between integrated weighted Kaplan–Meier estimates and is more sensitive to the magnitude of the difference in survival times between groups. In this paper, we employ the concept of the Pepe–Fleming method to determine an adequate sample size by calculating differences between Kaplan–Meier estimators while accounting for pair-wise correlation. We specify a positive stable frailty model for the joint distribution of paired survival times. We evaluate the performance of the proposed method by simulation studies and investigate the impact of accrual time, follow-up time, and the loss-to-follow-up rate, as well as the sensitivity of power under model misspecification. The results show that ignoring the pair-wise correlation leads to overestimating the required sample size. Furthermore, the proposed method is applied to two real-world studies, and the R code for the sample size calculation is made available to users.
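To illustrate the simulation-based evaluation described above, here is a minimal sketch that estimates power for paired right-censored data. Note that it substitutes a shared gamma frailty for the paper's positive stable frailty and a simple sign test for the Pepe–Fleming statistic, so every function name and parameter value here is illustrative, not from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def paired_power(n_pairs, base_rate=0.1, hazard_ratio=0.7, frailty_var=0.5,
                 censor_time=5.0, n_sim=2000, alpha=0.05):
    """Empirical power for paired right-censored data under a shared
    gamma frailty (mean 1, variance frailty_var), which here stands in
    for the positive stable frailty of the paper."""
    hits = 0
    for _ in range(n_sim):
        w = rng.gamma(1 / frailty_var, frailty_var, n_pairs)  # shared frailty
        t_ctrl = rng.exponential(1.0, n_pairs) / (w * base_rate)
        t_trt = rng.exponential(1.0, n_pairs) / (w * base_rate * hazard_ratio)
        obs = (t_ctrl <= censor_time) & (t_trt <= censor_time)  # both events seen
        if obs.sum() > 10:
            # crude paired comparison: sign test on fully observed pairs
            k = int((t_trt[obs] > t_ctrl[obs]).sum())
            hits += stats.binomtest(k, int(obs.sum())).pvalue < alpha
    return hits / n_sim

# power rises with the number of pairs; search for the smallest adequate size
for n in (50, 100, 150):
    print(n, paired_power(n))
```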

3.
The generalized linear mixed model (GLIMMIX) provides a powerful technique for modeling correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, the linearization methods of penalized quasi-likelihood (PQL) and marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of the random effects. In this article, we present tables of minimum sample sizes for hypotheses commonly tested in longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.

4.
We developed a sample size estimation program (SSEP) with which medical researchers can easily estimate the appropriate sample size for a specified significance level and statistical power using their favorite Web browsers. SSEP can estimate sample sizes for six statistical methods by Monte-Carlo simulation: Student's t-test, Welch's t-test, analysis of variance, Wilcoxon's rank sum test, the Kruskal-Wallis test, and the Cochran-Armitage test for linear trends. The SSEP simulation programs were created using the SAS macro language. Medical researchers can use this program interactively to determine reliable sample sizes when planning new prospective clinical studies and animal experiments.
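The core idea (estimate power by repeated simulation, then search for the smallest adequate n) can be sketched as follows for Student's t-test. This is a generic illustration, not the SSEP macro itself, and the effect size and settings are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def empirical_power(n, delta=0.5, sd=1.0, alpha=0.05, n_sim=5000):
    """Fraction of simulated two-sample t-tests that reject H0."""
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, sd, n)
        b = rng.normal(delta, sd, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sim

# smallest per-group n reaching at least 80% power
n = 10
while empirical_power(n) < 0.80:
    n += 5
print("required sample size per group:", n)
```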

5.
Pandarum R, Yu W, Hunter L. Ergonomics, 2011, 54(9): 866-875
Exploratory retail studies in South Africa indicate that plus-sized women experience problems and dissatisfaction with poorly fitting bras. The lack of 3-D anthropometric studies for the plus-size women's bra market initiated this research. 3-D body torso measurements were collected from a convenience sample of 176 plus-sized women in South Africa. 3-D breast measurements extracted from the TC2 NX12-3-D body scanner 'breast module' software were compared with traditional tape measurements. Regression equations show that the two methods of measurement were highly correlated although, on average, the bra cup size determining factor 'bust minus underbust' obtained from the 3-D method is approximately 11% smaller than that of the manual method. It was concluded that the total bust volume, which correlated with the quadrant volume (r = 0.81), cup length, bust length and bust prominence, should be selected as the overall measure of bust size, rather than the traditional bust girth and underbust measurements. Statement of Relevance: This study contributes new data and adds to the knowledge base of anthropometry and consumer ergonomics on bra fit and support, published in this, the Ergonomics journal, by Chen et al. (2010) on bra fit and White et al. (2009) on breast support during overground running.

6.
Interval goal programming (IGP) with a marginal penalty function (PF) was first proposed by Charnes and Collomb in 1972, and was further improved by Kvanli and other researchers. Recently, Lu and Chen proposed an efficient logarithmic method to formulate IGP with an S-shaped PF. However, their method requires adding many binary variables when the problem size becomes large, which increases the computational burden in the solution process. This study proposes an efficient approach for the S-shaped PF. Arbitrary PFs appear frequently in business and industry, yet none of the previous approaches can handle them without adding binary variables. The proposed approach can easily be extended to formulate an arbitrary PF in which binary variables are no longer required, regardless of the number of break points. The proposed method can thus improve the efficiency of IGP in solving large-scale management and decision problems involving PFs. To demonstrate the correctness and usefulness of the proposed model, illustrative examples are provided.

7.
A spreadsheet program is presented as a design tool for work sampling studies. The program incorporates the alternating Poisson process (APP) model for process fluctuation and allows for evaluation of process parameters with, or without, a finite sample size correction.
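For context, the standard work-sampling sample-size calculation with and without a finite-population (finite sample size) correction looks like the following. This is the textbook formula, not the paper's alternating-Poisson-process model, and the inputs are illustrative:

```python
import math

def work_sampling_n(p, e, z=1.96, N=None):
    """Observations needed to estimate a proportion p within +/- e.

    p : anticipated proportion of time spent in the activity
    e : desired absolute precision
    z : normal quantile for the confidence level
    N : total observable instants; if given, apply the
        finite-population correction
    """
    n0 = (z ** 2) * p * (1 - p) / e ** 2          # infinite-population size
    if N is None:
        return math.ceil(n0)
    return math.ceil(n0 / (1 + (n0 - 1) / N))     # finite-population correction

print(work_sampling_n(0.30, 0.05))           # without correction
print(work_sampling_n(0.30, 0.05, N=1000))   # with correction
```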

8.
For a complete sample, Chen [Chen, Z., 1996. Joint confidence region for the parameters of a Pareto distribution. Metrika 44, 191-197] proposed an interval estimation of the parameter θ and a joint confidence region for the two parameters of a Pareto distribution. When the first r lifetimes and the last s lifetimes out of n inspected items are missing, the sample is doubly type II censored. Since Chen's method cannot be extended to the doubly type II censored case, I propose another joint confidence region for the two parameters of a Pareto distribution. An interval estimation of the parameter ν is also given for a doubly type II censored sample. Since the complete sample case (r=s=0) and the right type II censored sample case (r=0) are special cases of doubly type II censored samples, the proposed confidence region is also appropriate for these two special cases and can thus be compared with Chen's method based on the area of the confidence region. The simulation results show that the proposed method is better than Chen's method in that it yields a smaller confidence area, although the difference in area between the two methods becomes very slight as the sample size grows. In this paper, I also propose prediction intervals for a future observation and for the ratio of two future consecutive failure times based on the doubly type II censored sample. Finally, an example is given to illustrate the proposed method.

9.
Estimating null values in relational database systems is an important research topic. Chen and Yeh (1997) presented a method for estimating null values in relational database systems, and Chen and Chen (1997) presented a method for fuzzy query translation for information in a distributed relational databases environment. In this article, the works of Chen and Chen (1997) and Chen and Yeh (1997) are extended to propose a method for estimating null values in the distributed relational databases environment. The proposed method provides a useful way to estimate incomplete data when the relations stored in a failed server cannot be accessed in the distributed relational databases environment.

10.
Ergonomics, 2012, 55(9): 866-875
Exploratory retail studies in South Africa indicate that plus-sized women experience problems and dissatisfaction with poorly fitting bras. The lack of 3-D anthropometric studies for the plus-size women's bra market initiated this research. 3-D body torso measurements were collected from a convenience sample of 176 plus-sized women in South Africa. 3-D breast measurements extracted from the TC2 NX12-3-D body scanner 'breast module' software were compared with traditional tape measurements. Regression equations show that the two methods of measurement were highly correlated although, on average, the bra cup size determining factor 'bust minus underbust' obtained from the 3-D method is approximately 11% smaller than that of the manual method. It was concluded that the total bust volume, which correlated with the quadrant volume (r = 0.81), cup length, bust length and bust prominence, should be selected as the overall measure of bust size, rather than the traditional bust girth and underbust measurements.

Statement of Relevance: This study contributes new data and adds to the knowledge base of anthropometry and consumer ergonomics on bra fit and support, published in this, the Ergonomics journal, by Chen et al. (2010) on bra fit and White et al. (2009) on breast support during overground running.

11.
Methods for analyzing clustered survival data are gaining popularity in biomedical research. Naive attempts at fitting marginal models to such data may lead to biased estimators and misleading inference when the size of a cluster is statistically correlated with cluster-specific latent factors or with one or more cluster-level covariates. A simple adjustment to correct for potentially informative cluster size is achieved through inverse cluster size reweighting. We give a methodology that incorporates this technique in fitting an accelerated failure time marginal model to clustered survival data. Furthermore, right censoring is handled by inverse probability of censoring reweighting through the use of a flexible model for the censoring hazard. The resulting methodology is examined through a thorough simulation study, and an illustrative example using a real dataset examines the effects of age at enrollment and smoking on tooth survival.
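A rough sketch of how the two reweighting steps could be combined, assuming a log-linear (AFT-type) mean model fitted by weighted least squares with a Kaplan-Meier estimate of the censoring distribution. The data layout and every function name here are invented for illustration; the authors' implementation models the censoring hazard more flexibly:

```python
import numpy as np

def censoring_km(times, events):
    """Kaplan-Meier estimate of the censoring survival function G,
    treating censorings (event == 0) as the 'events'."""
    order = np.argsort(times)
    t, c = times[order], 1 - events[order]
    at_risk = len(t) - np.arange(len(t))
    surv = np.cumprod(1.0 - c / at_risk)
    def G(u):
        idx = np.searchsorted(t, u, side="right") - 1
        return 1.0 if idx < 0 else surv[idx]
    return G

def weighted_aft_fit(times, events, X, cluster_sizes):
    """Weighted least squares of log(T) on X over uncensored rows,
    with weight (1 / cluster size) * (1 / G(T)): inverse cluster size
    times inverse probability of censoring."""
    G = censoring_km(times, events)
    keep = events == 1
    w = np.array([1.0 / (m * max(G(t), 1e-8))
                  for t, m in zip(times[keep], cluster_sizes[keep])])
    Z = np.column_stack([np.ones(keep.sum()), X[keep]])
    WZ = Z * w[:, None]
    beta = np.linalg.solve(Z.T @ WZ, WZ.T @ np.log(times[keep]))
    return beta  # intercept and covariate effects on log survival time
```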

12.
This paper develops a supervised discriminant technique, called graph embedding discriminant analysis (GEDA), for dimensionality reduction of high-dimensional data in small sample size problems. GEDA can be seen as a linear approximation of a multimanifold-based learning framework in which the nonlocal property is taken into account in addition to the marginal and local properties. GEDA seeks a set of projections that not only compact the intraclass samples and maximize the interclass margin, but also maximize the nonlocal scatter at the same time. This characteristic makes GEDA more intuitive and more powerful than linear discriminant analysis (LDA) and marginal Fisher analysis (MFA). The proposed method is applied to face recognition and is examined on the Yale, ORL and AR face image databases. The experimental results show that GEDA consistently outperforms LDA and MFA when the training sample size per class is small.

13.
Small sample properties of the maximum partial likelihood estimates for Cox's proportional hazards model depend on the sample size, the true values of the regression coefficients, the covariate structure, the censoring pattern and possibly the baseline hazard function. It would therefore be difficult to construct a formula or table giving the exact power of a statistical test for the treatment effect in any specific clinical trial. The simulation program described in this paper, written in SAS/IML, uses Monte-Carlo methods to provide estimates of the exact power for Cox's proportional hazards model. For illustrative purposes, the program was applied to real data obtained from a clinical trial performed in Japan. Since the program does not assume any specific function for the baseline hazard, it is, in principle, applicable to any censored survival data as long as they follow Cox's proportional hazards model.
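The simulation logic can be sketched in a few lines. Here the logrank test (the score test from a Cox model with one binary covariate) is applied to data drawn from exponential hazards; all rates and the hazard ratio are invented for illustration, and this is not the SAS/IML program itself:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

def cox_power(n_per_arm, hazard_ratio=0.6, base_rate=0.2,
              censor_rate=0.05, n_sim=1000, alpha=0.05):
    """Empirical power for detecting a treatment effect with the
    logrank test under exponential event and censoring times."""
    hits = 0
    for _ in range(n_sim):
        t0 = rng.exponential(1 / base_rate, n_per_arm)
        t1 = rng.exponential(1 / (base_rate * hazard_ratio), n_per_arm)
        c0 = rng.exponential(1 / censor_rate, n_per_arm)
        c1 = rng.exponential(1 / censor_rate, n_per_arm)
        res = logrank_test(np.minimum(t0, c0), np.minimum(t1, c1),
                           event_observed_A=t0 <= c0,
                           event_observed_B=t1 <= c1)
        hits += res.p_value < alpha
    return hits / n_sim

print(cox_power(100))
```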

14.
When measuring units is expensive or time consuming but ranking them can be done easily, it is known that ranked set sampling (RSS) is preferred to simple random sampling (SRS). Available results for RSS are developed under specific parametric assumptions or are asymptotic in nature, with few results available for finite-size samples when the underlying distribution of the observed data is unknown. We investigate the use of resampling techniques to draw inferences on population characteristics. To obtain standard error and confidence interval estimates, we discuss and compare three methods of resampling a given ranked set sample. Chen et al. (2004, Ranked Set Sampling: Theory and Applications, Springer, New York) suggest a natural method to obtain bootstrap samples from each row of an RSS; we prove that this method is consistent for a location estimator. We propose two other methods that are designed to obtain more stratified resamples from the given sample, and provide algorithms for both. We recommend a method that obtains a bootstrap RSS from the observations and prove several of its properties, including consistency for a location parameter. We also define two types of L-estimators for RSS, obtain expressions for their exact moments, and discuss an application to obtain confidence intervals for the Winsorized mean of an RSS.
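A minimal sketch of the row-wise bootstrap suggested by Chen et al. (2004): resample within each judgment-order row so the ranked strata are preserved. The data generation here is a simulated stand-in:

```python
import numpy as np

rng = np.random.default_rng(7)

def draw_rss(k, m, rng):
    """Draw a ranked set sample with set size k and m cycles:
    for each rank r and each cycle, draw a set of k units, sort
    (perfect ranking assumed), and keep the r-th order statistic."""
    rss = np.empty((k, m))
    for r in range(k):
        for c in range(m):
            rss[r, c] = np.sort(rng.normal(10.0, 2.0, k))[r]
    return rss

def rowwise_bootstrap(rss, stat=np.mean, n_boot=2000):
    """Bootstrap an RSS by resampling each row independently."""
    k, m = rss.shape
    reps = np.empty(n_boot)
    for b in range(n_boot):
        resample = np.stack([rng.choice(row, size=m, replace=True)
                             for row in rss])
        reps[b] = stat(resample)
    return reps

reps = rowwise_bootstrap(draw_rss(k=4, m=25, rng=rng))
print("bootstrap SE:", reps.std(ddof=1))
print("95% percentile CI:", np.percentile(reps, [2.5, 97.5]))
```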

15.
Recently, in the journal Expert Systems with Applications, Chen, Wu, Chiu, and Lee (2012) presented an alternative optimization process for determining the optimal replenishment lot size with a discontinuous issuing policy, considering an imperfect rework process and multiple shipments. At the same time, Chiu, Chiu, and Yang (2012) incorporated a multi-delivery policy into an economic production quantity model with partial rework. Both papers treat the number of shipments as a fixed, given value. In this paper, the optimal replenishment lot size and the optimal number of shipments are derived jointly for the inventory models of Chen et al. (2012) and Chiu, Chiu, et al. (2012), and two easy-to-apply solution procedures are proposed. The solutions reported in this paper are better than those of Chen et al. (2012) and Chiu, Chiu, et al. (2012).

16.
When using microarray analysis to determine gene dependence, one of the goals is to identify differentially expressed genes. However, the inherent variations make the analysis challenging. We propose a statistical method (SRA, swapped and regression analysis) designed specifically for dye-swapped designs and small sample sizes. Under general assumptions about the structure of the channel, scanner, and target effects from the experiment, we prove that SRA removes the bias caused by these effects. We compare our method with ANOVA, using both simulated and real data. The results show that SRA has consistent sensitivity for the identification of differentially expressed genes in dye-swapped microarrays, particularly when the sample size is small. The program for the proposed method is available at http://www.ibms.sinica.edu.tw/~csjfann/firstflow/program.htm.

17.
The asymptotic and exact conditional methods are widely used to compare two ordered multinomials. The asymptotic method is well known for its good performance when the sample size is sufficiently large. However, Brown et al. (2001) gave a contrary example in which this method performed liberally even when the sample size was large. In practice, when the sample size is moderate, the exact conditional method is a good alternative, but it is often criticised for its conservativeness. Exact unconditional methods are less conservative, but their computational burden usually renders them infeasible in practical applications. To address these issues, we develop an approximate unconditional method in this paper. Its computational burden is successfully alleviated by using an algorithm that is based on polynomial multiplication. Moreover, the proposed method not only corrects the conservativeness of the exact conditional method, but also produces a satisfactory type I error rate. We demonstrate the practicality and applicability of this proposed procedure with two real examples, and simulation studies are conducted to assess its performance. The results of these simulation studies suggest that the proposed procedure outperforms the existing procedures in terms of the type I error rate and power, and is a reliable and attractive method for comparing two ordered multinomials.
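The polynomial-multiplication trick referred to above exploits the fact that the distribution of a sum of independent discrete variables is the product of their probability generating functions, i.e. a sequence of convolutions. A generic illustration (not the authors' algorithm) for the exact distribution of a sum of integer-valued scores:

```python
import numpy as np

def exact_sum_distribution(pmfs):
    """Exact pmf of a sum of independent integer-valued variables.

    Each entry of `pmfs` is a vector p with p[j] = P(X = j).
    Multiplying the generating polynomials is an iterated convolution.
    """
    dist = np.array([1.0])
    for p in pmfs:
        dist = np.convolve(dist, p)
    return dist

# e.g. exact distribution of the sum of 20 iid scores on {0, 1, 2}
pmf = np.array([0.5, 0.3, 0.2])
dist = exact_sum_distribution([pmf] * 20)
print("P(sum >= 15) =", dist[15:].sum())  # exact tail probability
```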

18.
卢桂馥, 林忠, 金忠. 计算机科学 (Computer Science), 2010, 37(5): 251-253
A two-dimensional marginal Fisher discriminant analysis method based on a maximum difference criterion is proposed. The method takes as its discriminant criterion the difference between the similarity matrix Sp, which describes the separability of between-class data, and the similarity matrix Sc, which describes the compactness of within-class data, thereby avoiding the small sample size problem encountered by marginal Fisher discriminant analysis. The proposed method operates directly on image matrices and, compared with previous image-vector-based methods, further improves recognition accuracy. In addition, the intrinsic relationship between the maximum-difference marginal Fisher method and marginal Fisher discriminant analysis is revealed. Experiments on the ORL and Yale face databases show that the proposed method achieves a high recognition rate.
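A difference criterion of this kind leads to an ordinary rather than a generalized eigenproblem, which is why it sidesteps the small-sample singularity. A minimal sketch, with Sp and Sc assumed to be precomputed symmetric similarity/scatter matrices (the construction details of the paper are omitted):

```python
import numpy as np
from scipy.linalg import eigh

def max_difference_projections(Sp, Sc, n_components):
    """Top eigenvectors of Sp - Sc: directions that maximize
    between-class separability minus within-class compactness.
    No matrix inversion is needed, so a singular within-class
    matrix (the small sample size case) causes no trouble."""
    vals, vecs = eigh(Sp - Sc)              # eigenvalues in ascending order
    return vecs[:, ::-1][:, :n_components]  # keep the largest ones

# toy usage with random symmetric matrices standing in for Sp and Sc
rng = np.random.default_rng(3)
A, B = rng.normal(size=(50, 50)), rng.normal(size=(50, 50))
Sp, Sc = A @ A.T, B @ B.T
W = max_difference_projections(Sp, Sc, n_components=10)  # 50x10 projection
```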

19.
Marginal nearest neighbor null space discriminant analysis
A marginal nearest neighbor null space discriminant analysis algorithm is proposed. The algorithm first defines a new objective function; theoretical analysis and proof of this objective show that PCA can first be used to reduce the high-dimensional samples to a low-dimensional subspace, and that in this low-dimensional subspace the objective function loses no effective discriminant information. The algorithm not only effectively solves the small sample size problem, but also requires only three eigenvalue decompositions to obtain an orthogonal projection matrix, which effectively improves its recognition performance. A nonlinear extension of the algorithm based on kernel mapping is also given. Experimental results on face databases confirm the effectiveness of the proposed method.

20.
The conventional approach to solving the replenishment lot size problem is to apply differential calculus to the long-run average production cost function, with the need to prove optimality first. Recent studies have proposed an algebraic approach to the solution of the classic economic order quantity (EOQ) and economic production quantity (EPQ) models without reference to the use of derivatives. This paper extends that approach to the solution of the specific EPQ model examined by Chiu et al. [Chiu, S. W., Chen, K.-K., Lin, H.-D. Numerical method for determination of the optimal lot size for a manufacturing system with discontinuous issuing policy and rework. International Journal for Numerical Methods in Biomedical Engineering. doi: 10.1002/cnm.1369 (in press; published online 10 March 2010)]. As a result, the optimal replenishment lot size and a simplified optimal production-inventory cost formula for this particular EPQ model can be derived without derivatives. This alternative approach may enable practitioners with little knowledge of calculus to understand realistic production systems with ease.
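For a feel of the derivative-free technique, here is the completing-the-square derivation for the classic EPQ cost function; the paper's model adds rework and a discontinuous issuing policy, which change the constants but not the idea. Here K is the setup cost, D the demand rate, h the unit holding cost, and d/p the ratio of demand rate to production rate:

```latex
\[
TC(Q) = \frac{KD}{Q} + \frac{h(1 - d/p)}{2}\,Q
      = \left( \sqrt{\frac{KD}{Q}} - \sqrt{\frac{h(1 - d/p)}{2}\,Q} \right)^{2}
        + \sqrt{2\,K D\,h(1 - d/p)} .
\]
```

The squared term is the only part that depends on Q and is nonnegative, so TC(Q) is minimized exactly when it vanishes, i.e. when KD/Q = h(1 - d/p)Q/2, giving Q* = sqrt(2KD / (h(1 - d/p))) and TC(Q*) = sqrt(2KDh(1 - d/p)), with no derivatives required.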
