1.
In this paper, we propose a new two-parameter lifetime distribution with increasing failure rate, the complementary exponential geometric distribution, which is complementary to the exponential geometric model proposed by Adamidis and Loukas (1998). The new distribution arises from a latent complementary risks scenario, in which the lifetime associated with a particular risk is not observable; rather, we observe only the maximum lifetime value among all risks. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulas for its reliability and failure rate functions, moments (including the mean and variance), coefficient of variation, and mode. Parameter estimation is based on the usual maximum likelihood approach. We report the results of a misspecification simulation study performed to assess the extent of misspecification errors when testing the exponential geometric distribution against our complementary one, for different sample sizes and censoring percentages. The methodology is illustrated on four real datasets, and we compare the two modeling approaches.
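As a sketch of the latent construction described above, under the common parameterization in which the number of risks N is geometric on {1, 2, ...} with P(N = n) = θ(1−θ)^(n−1) and the individual lifetimes are i.i.d. exponential with rate λ (the paper's exact parameterization may differ), the observed lifetime X = max(X_1, ..., X_N) has

```latex
F(x) = \mathbb{E}\!\left[(1-e^{-\lambda x})^{N}\right]
     = \frac{\theta\,(1-e^{-\lambda x})}{\theta+(1-\theta)\,e^{-\lambda x}},
\qquad
f(x) = \frac{\theta\lambda\, e^{-\lambda x}}{\bigl[\theta+(1-\theta)\,e^{-\lambda x}\bigr]^{2}},
\qquad
h(x) = \frac{f(x)}{1-F(x)} = \frac{\theta\lambda}{\theta+(1-\theta)\,e^{-\lambda x}}.
```

The hazard increases from θλ at x = 0 to λ as x → ∞, consistent with the increasing failure rate noted in the abstract.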
2.
This paper considers the estimation of Kendall's tau for bivariate data (X, Y) when only Y is subject to right-censoring. Although τ is estimable under weak regularity conditions, the estimators proposed by Brown et al. [1974. Nonparametric tests of independence for censored data, with applications to heart transplant studies. Reliability and Biometry, 327-354], Weier and Basu [1980. An investigation of Kendall's τ modified for censored data with applications. J. Statist. Plann. Inference 4, 381-390] and Oakes [1982. A concordance test for independence in the presence of censoring. Biometrics 38, 451-455], which are standard in this context, fail to be consistent when τ≠0 because they use only information from the marginal distributions. An exception is the renormalized estimator of Oakes [2006. On consistency of Kendall's tau under censoring. Technical Report, Department of Biostatistics and Computational Biology, University of Rochester, Rochester, NY], whose consistency has been established for all possible values of τ, but only in the context of the gamma frailty model. Wang and Wells [2000. Estimation of Kendall's tau under censoring. Statist. Sinica 10, 1199-1215] were the first to propose an estimator that accounts for joint information. Four more are developed here: the first three extend the methods of Brown et al. [1974], Weier and Basu [1980] and Oakes [1982] to account for the information provided by X, while the fourth inverts an estimate of Pr(Yi ≤ y | Xi = xi, Yi > ci) to impute the value of Yi censored at Ci = ci. Following Lim [2006. Permutation procedures with censored data. Comput. Statist. Data Anal. 50, 332-345], a nonparametric estimator is also considered which averages the estimates obtained from a large number of possible configurations of the observed data (X1, Z1), …, (Xn, Zn), where Zi = min(Yi, Ci). Simulations comparing these various estimators of Kendall's tau are presented, together with an illustration involving the well-known Stanford heart transplant data.
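To make the consistency issue concrete, here is a minimal sketch of the baseline concordance estimator in the style of Oakes [1982], which uses only pairs whose ordering in Y can be determined despite censoring; the names are illustrative, and this is the estimator the paper improves upon, not one of the four new ones:

```python
import numpy as np

def kendall_tau_censored(x, z, delta):
    """Baseline concordance estimator of Kendall's tau when only Y is
    right-censored.  z = min(y, c); delta = 1 if the event was observed.
    A pair (i, j) is orderable in Y only when the smaller of z_i, z_j is
    an uncensored event time; X is always fully observed."""
    n, num, den = len(x), 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if z[i] == z[j] or x[i] == x[j]:
                continue                      # skip ties for simplicity
            lo = i if z[i] < z[j] else j      # index of the smaller Z
            if delta[lo] == 0:
                continue                      # Y-ordering undeterminable
            sign_y = 1.0 if z[i] < z[j] else -1.0
            sign_x = 1.0 if x[i] < x[j] else -1.0
            num += sign_x * sign_y            # +1 concordant, -1 discordant
            den += 1
    return num / den if den else np.nan
```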
3.
The aim of this paper is to provide a composite likelihood approach to handling spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric for describing purchasing dependence. Motivated by such e-commerce data, we present a model for the dependence structure of time-to-event data subject to spatial dependence in order to characterize purchasing behavior. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations, and we prove that the resulting estimators are consistent and asymptotically normal under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.
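A minimal sketch of the pairwise composite likelihood under the FGM copula follows; the exponential margins, the tanh link phi_ij = tanh(a + b*d_ij) keeping the dependence parameter in (−1, 1), and all names are illustrative assumptions, not the paper's specification (which also handles censored event times):

```python
import numpy as np
from scipy.optimize import minimize

def fgm_pairwise_nll(params, t, dist, rate):
    """Pairwise composite negative log-likelihood under the FGM copula,
    whose density is c(u, v) = 1 + phi*(1-2u)*(1-2v).  Margins are taken
    as known exponential(rate), so their log-density terms are constant
    in (a, b) and are dropped.  dist holds pairwise distances."""
    a, b = params
    t = np.asarray(t, float)
    u = 1.0 - np.exp(-rate * t)              # marginal CDF values
    nll, n = 0.0, len(t)
    for i in range(n):
        for j in range(i + 1, n):
            phi = np.tanh(a + b * dist[i, j])
            c = 1.0 + phi * (1 - 2 * u[i]) * (1 - 2 * u[j])
            nll -= np.log(max(c, 1e-12))
    return nll

# est = minimize(fgm_pairwise_nll, x0=[0.0, 0.0], args=(t, dist, rate))
```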
4.
Statistical Inference for the Exponential Distribution from Type-I Censored Data with Missing Observations
Missing test data are frequently encountered in product life testing and are relatively complicated to handle. When the lifetime distribution is exponential, we give an approximate method for point and interval estimation of the distribution parameter from Type-I censored life-test data with missing observations. Extensive Monte Carlo simulations show that the accuracy of the parameter estimates is satisfactory as long as the number of missing observations is not too large. We also prove theoretically that the distribution of a pivotal quantity can be used to construct an interval estimate for the parameter m.
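For reference, a minimal sketch of the complete-data baseline (without the paper's missing-data adjustment): under Type-I censoring at time tau, the exponential mean is estimated from the total time on test, and an approximate interval follows from the chi-square pivot 2T/θ ≈ χ²(2r), which is exact under Type-II censoring and a common approximation under Type-I:

```python
from scipy.stats import chi2

def exp_type1_mle(failure_times, n_units, tau, conf=0.90):
    """Point and approximate interval estimate of the exponential mean
    life from a life test truncated at time tau (Type-I censoring)."""
    r = len(failure_times)                                 # failures observed
    total_time = sum(failure_times) + (n_units - r) * tau  # total time on test
    theta_hat = total_time / r                             # MLE of mean life
    lo = 2 * total_time / chi2.ppf(1 - (1 - conf) / 2, 2 * r)
    hi = 2 * total_time / chi2.ppf((1 - conf) / 2, 2 * r)
    return theta_hat, (lo, hi)
```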
5.
Taguchi's robust design strategy, whose aim is to make processes and products insensitive to factors that are hard or impossible to control (termed noise factors), is an important paradigm for improving products and processes. We present an overview of the strategy and tactics of robust design and demonstrate its usefulness for reliability improvement. Two important components of robust design are a criterion for assessing the effect of the noise factors and experimentation according to specialized experimental plans. Recent criticism of Taguchi's criterion and his analysis of its estimates has led to an alternative approach of modelling the response directly. We give additional reasons for using this response-model approach in the context of reliability improvement. Using the model for the response, appropriate criteria for assessing the effect of the noise factors can then be evaluated. We consider an actual experiment and reanalyse its data to illustrate these ideas and methods.
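As a sketch of the response-model approach with one control factor x and one noise factor z (a deliberately minimal setup; the actual experiment involves more factors), one fits a regression with a control-by-noise interaction and then evaluates the variance transmitted by the noise at each control setting:

```python
import numpy as np

def fit_response_model(x, z, y):
    """Least-squares fit of y = b0 + b1*x + b2*z + b3*x*z + error."""
    X = np.column_stack([np.ones_like(x), x, z, x * z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta                                   # [b0, b1, b2, b3]

def transmitted_variance(beta, x, var_z):
    """Noise-transmitted variance of the response at control setting x:
    Var(y | x) = (b2 + b3*x)**2 * var_z, to be minimized over x."""
    return (beta[2] + beta[3] * x) ** 2 * var_z
```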
6.
We study the two-parameter maximum likelihood estimation (MLE) problem for the Weibull distribution in the presence of interval data. Without interval data, the problem can be solved easily by regular MLE methods, because the restricted MLE of the scale parameter β for a given shape parameter α has an analytical form, so α can be solved efficiently from its profile score function by traditional numerical methods. In the presence of interval data, however, no analytical form for the restricted MLE of β exists, and directly applying regular MLE methods can be inefficient and ineffective. To handle interval data in the MLE problem more efficiently and effectively, a new approach is developed in this paper. The new approach combines the Weibull-to-exponential transformation technique with the equivalent failure and lifetime technique; the concept of equivalence is developed to estimate exponential failure rates from uncertain data, including interval data. Since the definition of equivalent failures and lifetimes follows the EM algorithm, convergence of the failure rate estimates obtained from equivalent failures and lifetimes is proved mathematically. The new approach is demonstrated and validated through two published examples, and its performance under different conditions is studied by Monte Carlo simulations. The simulations indicate that the profile score function for α has only one maximum in most cases; this property enables an efficient search for the optimal value of α.
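The easy case without interval data can be sketched directly; the following profile negative log-likelihood (illustrative names; exact and right-censored observations only) exploits the closed-form restricted MLE of the scale, β̂(α) = (Σ t_i^α / r)^(1/α) with r the number of failures, and is what the paper's equivalent-failure technique generalizes:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def weibull_profile_nll(alpha, t, event):
    """Profile negative log-likelihood of the Weibull shape alpha for
    exact (event == 1) and right-censored (event == 0) data, with the
    scale beta replaced by its closed-form restricted MLE."""
    t, event = np.asarray(t, float), np.asarray(event, int)
    r = event.sum()
    beta = (np.sum(t ** alpha) / r) ** (1.0 / alpha)
    ll = (r * np.log(alpha) - r * alpha * np.log(beta)
          + (alpha - 1) * np.sum(np.log(t[event == 1]))
          - np.sum((t / beta) ** alpha))
    return -ll

# res = minimize_scalar(weibull_profile_nll, bounds=(0.1, 10.0),
#                       args=(t, event), method="bounded")
```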
7.
Relatively recent research has illustrated the potential of tobit regression for studying factors that affect vehicle accident rates (accidents per distance traveled) on specific roadway segments. Tobit regression has been used because accident rates on specific roadway segments are continuous data that are left-censored at zero (censored because accidents may not be observed on all roadway segments during the period over which data are collected). This censoring may arise from a number of sources, one of which is the possibility that less severe crashes are under-reported and thus less likely to appear in crash databases. Traditional tobit-regression analyses have dealt with the overall accident rate (all crashes regardless of injury severity), so the issue of censoring by crash severity has not been addressed. However, a tobit-regression approach that considers accident rates by injury-severity level, such as the rate of no-injury, possible-injury and injury accidents per distance traveled, can potentially provide new insights and address the possibility that censoring varies by crash-injury severity. Using five years of data from highways in Washington State, this paper estimates a multivariate tobit model of accident-injury-severity rates that addresses the possibility of differential censoring across injury-severity levels, while also accounting for the possible contemporaneous error correlation resulting from commonly shared unobserved characteristics across roadway segments. The empirical results show that the multivariate tobit model outperforms its univariate counterpart, is practically equivalent to the multivariate negative binomial model, and has the potential to provide a fuller understanding of the factors determining accident-injury-severity rates on specific roadway segments.
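A minimal univariate sketch of the tobit likelihood for a single injury-severity rate (the paper's model is multivariate with correlated errors, which this single-equation simplification ignores; names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def tobit_nll(params, X, y):
    """Negative log-likelihood of a tobit model left-censored at zero:
    y* = X @ beta + e, e ~ N(0, sigma^2), y = max(0, y*)."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                       # keep sigma positive
    mu = X @ beta
    cens = y <= 0
    ll = norm.logcdf(-mu[cens] / sigma).sum()       # segments with rate 0
    ll += (norm.logpdf((y[~cens] - mu[~cens]) / sigma)
           - np.log(sigma)).sum()                   # positive rates
    return -ll
```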
8.
Estimating the risk of relapse for breast cancer patients is necessary, since it affects the choice of treatment. This problem involves analysing patients' times to relapse and relating them to prognostic variables; some of the times to relapse will usually be censored. We investigate various ways of using neural network models to extend traditional statistical models in this situation. Such models are better able than linear logistic or Cox regression models to capture both non-linear effects of prognostic factors and interactions between them. With the dataset used in our study, however, a neural network model does not significantly improve the prediction of the risk of relapse. Predicting the risk that a patient will relapse within three years, say, is possible from these data, but predicting when any relapse will happen is not.
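One simple way to pose the fixed-horizon task from censored follow-up data is sketched below (a common construction, not necessarily the paper's exact one): patients censored before the horizon are dropped, since their three-year status is unknown:

```python
import numpy as np

def three_year_labels(time, event, horizon=3.0):
    """Binary 'relapse within horizon' target from censored follow-up.
    Returns a keep-mask and the labels for the kept patients."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    relapse = (event == 1) & (time <= horizon)   # relapse seen in window
    known_ok = time > horizon                    # followed past the horizon
    keep = relapse | known_ok
    return keep, relapse[keep].astype(int)
```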
9.
The EM algorithm is a powerful technique for determining maximum likelihood estimates (MLEs) from binary data, for which the maximum likelihood estimators of the parameters cannot be expressed in closed form. In this paper, we consider one-shot devices, which can be used only once and are destroyed after use, so that the actual observation is the condition of each device under test rather than its real lifetime. We develop the EM algorithm for such data under an exponential distribution for the lifetimes. Owing to advances in manufacturing design and technology, products have become highly reliable, with long lifetimes; for this reason, accelerated life tests are performed to collect useful information on the parameters of the lifetime distribution. For such a test, a Bayesian approach with a normal prior was proposed recently by Fan et al. (2009). Through a simulation study, we show that the EM algorithm and this Bayesian approach are both useful techniques for analyzing the binary data arising from one-shot device testing, and a comparison of their performance shows that, while the Bayesian approach is good for highly reliable products, the EM algorithm is good for moderate- and low-reliability situations.
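A minimal sketch of the EM iteration for the exponential rate from such binary data is given below, without the accelerated-test stress covariates; it uses the standard conditional expectations E[T | T ≤ τ] = 1/λ − τe^(−λτ)/(1 − e^(−λτ)) and, by memorylessness, E[T | T > τ] = τ + 1/λ:

```python
import numpy as np

def em_one_shot_exponential(tau, failed, n_iter=200, lam=1.0):
    """EM for the exponential rate lambda when device i is destructively
    inspected at time tau[i] and we only learn whether it had already
    failed (failed[i] == 1) or was still working."""
    tau, failed = np.asarray(tau, float), np.asarray(failed, bool)
    for _ in range(n_iter):
        # E-step: expected lifetime of each device given its outcome.
        p = np.clip(1.0 - np.exp(-lam * tau), 1e-12, None)  # P(T <= tau_i)
        e_fail = 1.0 / lam - tau * np.exp(-lam * tau) / p
        e_surv = tau + 1.0 / lam                            # memorylessness
        expected_t = np.where(failed, e_fail, e_surv)
        # M-step: complete-data MLE of the rate.
        lam = len(tau) / expected_t.sum()
    return lam
```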
10.
Various machine learning techniques have been applied to problems in survival analysis over the last decade. They have usually been adapted to censored survival data by using the information on observation time, either by learning from only part of the data or by intervening in the learning algorithms; efficient models have been established in various fields of clinical medicine and bioinformatics. In this paper, we propose a pre-processing method that adapts censored survival data for use with ordinary machine learning algorithms. This is done by pre-assigning each censored instance a positive or negative outcome according to its features and observation time. The proposed procedure calculates the goodness of fit of each censored instance to both the distribution of positives and the "spoiled" distribution of negatives in the entire dataset, and relabels that instance accordingly. We performed thorough empirical testing of our method in a simulation study and on two real-world medical datasets, using the naive Bayes classifier and decision trees. Compared with a popular machine learning method for survival data, our method provided good results, especially when applied to heavily censored data.
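As a simplified stand-in for the relabelling step (not the authors' goodness-of-fit procedure), one can relabel each censored instance from the Kaplan-Meier estimate of the probability that its event occurs within the study window, given survival to its censoring time:

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier survival values at the distinct event times."""
    uniq = np.unique(time[event == 1])
    s, surv = 1.0, []
    for u in uniq:
        d = np.sum((time == u) & (event == 1))   # events at u
        n = np.sum(time >= u)                    # at risk just before u
        s *= 1.0 - d / n
        surv.append(s)
    return uniq, np.array(surv)

def surv_at(uniq, surv, x):
    i = np.searchsorted(uniq, x, side="right") - 1
    return 1.0 if i < 0 else surv[i]

def relabel_censored(time, event, window):
    """Hard-label censored instances: positive if the KM-estimated
    conditional probability of an event within `window` exceeds 1/2."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    uniq, surv = km_curve(time, event)
    labels = event.copy()
    for i in np.where(event == 0)[0]:
        s_c = surv_at(uniq, surv, time[i])
        s_w = surv_at(uniq, surv, window)
        p_event = (s_c - s_w) / s_c if s_c > 0 else 1.0
        labels[i] = 1 if p_event > 0.5 else 0
    return labels
```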