30 results found (search time: 15 ms)
1.
Edward C. Malthouse. Data Mining and Knowledge Discovery, 2007, 15(3): 383-402
This paper discusses a new application of data mining, quantifying the importance of responding to trigger events with reactive
contacts. Trigger events happen during a customer’s lifecycle and indicate some change in the relationship with the company.
If detected early, the company can respond to the problem and retain the customer; otherwise the customer may switch to another
company. It is usually easy to identify many potential trigger events. What is needed is a way of prioritizing which events
demand interventions. We conceptualize the trigger event problem and show how survival analysis can be used to quantify the
importance of addressing various trigger events. The method is illustrated on four real data sets from different industries
and countries.
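The survival-analysis approach this abstract describes can be illustrated with a minimal Kaplan-Meier estimator over time-from-trigger-to-churn data. This is a generic sketch, not code from the paper; the function name and inputs are illustrative:

```python
import numpy as np

def kaplan_meier(durations, events):
    """Kaplan-Meier survival estimate.

    durations: time from a trigger event to churn (or to censoring)
    events:    1 if churn was observed, 0 if the customer was censored
    Returns (unique event times, survival probability at each time).
    """
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=int)
    times = np.unique(durations[events == 1])
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)               # still under observation at t
        d = np.sum((durations == t) & (events == 1))   # churned exactly at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return times, np.array(surv)
```

Comparing such curves across trigger types is one simple way to rank which events most urgently demand intervention.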
2.
The failure rate of a machine tool depends on its inherent reliability. Operating-environment factors do not change the underlying failure law, but they can advance or delay the failure cycle to some degree, so accounting for the operating environment is especially important when forecasting machine tool spare parts. Based on an analysis of the spindle operating environment of a particular machine tool model, the main influencing covariates are screened, and, after a hypothesis test of the assumption that the data are independent and identically distributed, a PHM regression model is used to build a reliability model for the cylindrical roller bearing. On this basis, a renewal model is used to estimate failure occurrence points and, with reference to the required spare-part availability rate, a demand forecasting model for spindle spare parts is established. Calculation results show that spare-part forecasts that account for environmental factors are more accurate. The results provide a theoretical reference for forecasting non-repairable machine tool spare parts.
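A minimal sketch of the PHM (proportional hazards model) idea the abstract describes: a Weibull baseline hazard scaled by environmental covariates. All parameter names and values below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def phm_hazard(t, z, beta_cov, shape=2.0, scale=1000.0):
    """Cox PHM form: h(t|z) = h0(t) * exp(beta_cov . z),
    with a Weibull baseline hazard h0."""
    h0 = (shape / scale) * (t / scale) ** (shape - 1.0)
    return h0 * np.exp(np.dot(beta_cov, z))

def phm_reliability(t, z, beta_cov, shape=2.0, scale=1000.0):
    """R(t|z) = exp(-H0(t) * exp(beta_cov . z)),
    H0 the Weibull cumulative baseline hazard."""
    H0 = (t / scale) ** shape
    return np.exp(-H0 * np.exp(np.dot(beta_cov, z)))
```

A harsher environment (larger covariate values with positive coefficients) multiplies the hazard and so pulls the predicted failure point, and hence the spare-part demand, earlier.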
3.
Thomas Kneib. Computational Statistics & Data Analysis, 2006, 51(2): 777-792
Mixed model-based estimation of additive or geoadditive regression models has become popular throughout recent years. It provides a unified and modular framework that facilitates joint estimation of nonparametric covariate effects and the corresponding smoothing parameters. Therefore, extensions of mixed model-based inference to a Cox-type regression model for the hazard rate are considered, allowing for a combination of general censoring schemes for the survival times and a flexible, geoadditive predictor. In particular, the proposed methodology allows for arbitrary combinations of right, left, and interval censoring as well as left truncation. The geoadditive predictor comprises time-varying effects, nonparametric effects of continuous covariates, spatial effects, and potentially a number of extensions such as cluster-specific frailties or interaction surfaces. In addition, all covariates are allowed to be piecewise constant time-varying. Nonlinear and time-varying effects as well as the baseline hazard rate are modeled by penalized splines. Spatial effects can be included based on either Markov random fields or stationary Gaussian random fields. Estimation is based on a reparametrization of the model as a variance component mixed model. The variance parameters, corresponding to inverse smoothing parameters, can then be determined using an approximate marginal likelihood approach. An analysis of childhood mortality in Nigeria serves as an application, where the interval censoring framework additionally makes it possible to deal with the problem of heaped survival times. The effect of ignoring the impact of interval-censored observations is investigated in a simulation study.
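The penalized splines mentioned above rest on a difference penalty over basis coefficients. A minimal sketch of that building block (the basis matrix `B` and penalty weight `lam` are placeholders, not the paper's geoadditive implementation):

```python
import numpy as np

def pspline_penalty(n_basis, order=2):
    """Penalty matrix K = D'D, where D is the order-th
    difference matrix acting on the spline coefficients."""
    D = np.diff(np.eye(n_basis), n=order, axis=0)
    return D.T @ D

def penalized_fit(B, y, lam, order=2):
    """Penalized least squares: minimize ||y - B a||^2 + lam * a'Ka.
    Larger lam shrinks the fit toward a polynomial of degree order-1."""
    K = pspline_penalty(B.shape[1], order)
    return np.linalg.solve(B.T @ B + lam * K, B.T @ y)
```

In the mixed-model reparametrization, `lam` becomes a ratio of variance components and is estimated from the data rather than chosen by hand.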
4.
J. Ross Beveridge, Geof H. Givens, P. Jonathon Phillips, Bruce A. Draper, David S. Bolme, Yui Man Lui. Image and Vision Computing, 2010
A study is presented showing how three state-of-the-art algorithms from the Face Recognition Vendor Test 2006 (FRVT 2006) are affected by factors related to face images and people. The recognition scenario compares highly controlled images to images taken of people as they stand before a camera in settings such as hallways and outdoors in front of buildings. A Generalized Linear Mixed Model (GLMM) is used to estimate the probability that an algorithm successfully verifies a person conditioned upon the factors included in the study. The factors associated with people are: gender, race, age, and whether they wear glasses. The factors associated with images are: the size of the face, edge density, and region density. The setting, indoors versus outdoors, is also a factor. Edge density can change the estimated probability of verification dramatically, for example from about 0.15 to 0.85. However, this effect is not consistent across algorithms or settings. This finding shows that simple measurable factors are capable of characterizing face quality; however, these factors typically interact with both algorithm and setting.
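Edge density, one of the image factors in the study, can be approximated very simply as the mean gradient magnitude of the face region. This is a rough sketch; the paper's exact definition of edge density may differ:

```python
import numpy as np

def edge_density(img):
    """Mean gradient magnitude of a grayscale image,
    used here as a simple edge-density proxy."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows and columns
    return float(np.mean(np.hypot(gx, gy)))
```

A flat image scores zero, while heavily textured or sharply lit faces score high; the study's point is that such a cheap scalar already predicts verification success, but only jointly with algorithm and setting.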
5.
Reliability Meets Big Data: Opportunities and Challenges (cited by 1: 0 self-citations, 1 by others)
Reliability field data such as that obtained from warranty claims and maintenance records have been used traditionally for such purposes as generating predictions for warranty costs and optimizing the cost of system operation and maintenance. In the current (and future) generation of many products, the nature of field reliability data is changing dramatically. In particular, products can be outfitted with sensors that capture information about how, when, and under what environmental and operating conditions products are being used. Today some of that information is being used to monitor system health, and interest is building to develop prognostic information systems. There are, however, many other potential applications for using such data. In this article we review some applications where field reliability data are used and explore some of the opportunities to use modern reliability data to provide stronger statistical methods to operate and predict the performance of systems in the field. We also provide some examples of recent technical developments designed to be used in such applications and outline remaining challenges.
6.
Christiaan Heij, Patrick J.F. Groenen, Dick van Dijk. Computational Statistics & Data Analysis, 2007, 51(7): 3612-3625
Forecasting with many predictors is of interest, for instance, in macroeconomics and finance. The forecast accuracy of two methods for dealing with many predictors is compared, that is, principal component regression (PCR) and principal covariate regression (PCovR). Simulation experiments with data generated by factor models and regression models indicate that, in general, PCR performs better for the first type of data and PCovR performs better for the second type of data. An empirical application to four key US macroeconomic variables shows that PCovR achieves improved forecast accuracy in some situations.
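A minimal sketch of principal component regression (PCR), one of the two methods compared; PCovR and the paper's simulation design are not reproduced here:

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: regress y on the first k
    principal components of X, then map the coefficients back to
    the original predictors."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]                       # n x k component scores
    gamma = np.linalg.lstsq(scores, yc, rcond=None)[0]
    beta = Vt[:k].T @ gamma                         # coefficients on original X
    intercept = y.mean() - X.mean(axis=0) @ beta
    return beta, intercept

def pcr_predict(X, beta, intercept):
    return X @ beta + intercept
```

The key design choice, and the source of PCovR's advantage on regression-generated data, is that PCR picks components to explain X alone, while PCovR picks them to explain X and y jointly.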
7.
Reliability of wind turbines modeled by a Poisson process with covariates, unobserved heterogeneity and seasonality
Reliability of wind turbines is analyzed with the use of an easily interpretable mathematical model based on a Poisson process, which takes into account jointly observable differences between turbines described by covariates (type of turbine, size of turbine, harshness of environment, installation date, and seasonal effects) as well as unobservable differences modeled by a standard frailty approach known from survival analysis. The introduced model is applied to failure data from the WMEP database, and the fit of the model is checked. The paper demonstrates the usefulness of the model for determining critical factors of wind turbine reliability, with potential for prediction for future installations. In particular, the model's ability to take into account unobserved heterogeneity is demonstrated. The model can easily be adapted for use with different datasets or for analysis of repairable systems other than wind turbines. Copyright © 2016 John Wiley & Sons, Ltd.
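The Poisson-with-frailty idea can be sketched by drawing a mean-one gamma frailty per turbine and scaling each turbine's covariate-driven Poisson rate by it. Variable names and the gamma parameterization below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def simulate_failures(exposure, x, beta, frailty_var, rng):
    """Failure counts from a Poisson process with covariates and frailty.

    rate_i = u_i * exposure_i * exp(x_i . beta)
    u_i ~ Gamma(1/frailty_var, frailty_var), so E[u] = 1, Var[u] = frailty_var.
    The frailty captures unobserved turbine-to-turbine heterogeneity and
    makes the counts overdispersed relative to a plain Poisson model.
    """
    lam = exposure * np.exp(x @ beta)
    u = rng.gamma(shape=1.0 / frailty_var, scale=frailty_var, size=len(lam))
    return rng.poisson(u * lam)
```

Marginally this is a negative binomial model, which is why frailty is the standard device for repairable-system data whose variance exceeds its mean.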
8.
9.
Dimitris Rizopoulos. Computational Statistics & Data Analysis, 2012, 56(3): 491-501
Joint models for longitudinal and time-to-event data have recently attracted a lot of attention in statistics and biostatistics. Even though these models enjoy a wide range of applications in many different statistical fields, they have not yet found their rightful place in the toolbox of modern applied statisticians, mainly because they are rather computationally intensive to fit. The main difficulty arises from the requirement for numerical integration with respect to the random effects. This integration is typically performed using Gaussian quadrature rules whose computational complexity increases exponentially with the dimension of the random-effects vector. A solution to overcome this problem is proposed using a pseudo-adaptive Gauss-Hermite quadrature rule. The idea behind this rule is to use information about the shape of the integrand by separately fitting a mixed model for the longitudinal outcome. Simulation studies show that the pseudo-adaptive rule performs excellently in practice, and is considerably faster than the standard Gauss-Hermite rule.
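The core trick, recentring and rescaling the Gauss-Hermite nodes around the location and spread suggested by a separately fitted model, can be sketched for a one-dimensional random effect. This is a generic adaptive quadrature sketch, not the paper's pseudo-adaptive rule itself:

```python
import numpy as np

def adaptive_ghq(f, mu, sigma, n=15):
    """Approximate E[f(b)] for b ~ N(mu, sigma^2) by Gauss-Hermite
    quadrature, with nodes shifted by mu and scaled by sigma (the
    'adaptive' step that concentrates nodes where the integrand lives)."""
    # probabilists' Hermite rule: integrates against exp(-x^2 / 2)
    nodes, weights = np.polynomial.hermite_e.hermegauss(n)
    return np.sum(weights * f(mu + sigma * nodes)) / np.sqrt(2.0 * np.pi)
```

With the nodes placed well, a handful of points per random effect suffices, which is what makes the pseudo-adaptive rule so much faster than the standard rule in the paper's simulations.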
10.
Wagner Barreto-Souza, Klaus L.P. Vasconcellos. Computational Statistics & Data Analysis, 2011, 55(3): 1379-1393
In this paper we introduce a general extreme-value regression model and derive Cox and Snell's (1968) general formulae for the second-order biases of the maximum likelihood estimates (MLEs) of the parameters. We obtain formulae which can be computed by means of weighted linear regressions. Furthermore, we give the skewness of order n−1/2 of the maximum likelihood estimators of the parameters by using Bowman and Shenton's (1988) formula. A simulation study based on Cox and Snell's (1968) formulae is discussed. Practical uses of this model and of the derived formulae for bias correction are also presented.
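For intuition, Cox and Snell's (1968) second-order bias correction works out in closed form for the exponential-rate MLE, whose bias is lam/n to first order. This classic textbook example is illustrative only; it is not the extreme-value regression model of the paper:

```python
import numpy as np

def exp_rate_mle(x):
    """MLE of the rate of an exponential sample: lam_hat = n / sum(x)."""
    return len(x) / np.sum(x)

def exp_rate_bias_corrected(x):
    """Cox-Snell corrected estimator: the second-order bias of lam_hat
    is b(lam) = lam / n, so lam_tilde = lam_hat - lam_hat / n
                                      = lam_hat * (n - 1) / n."""
    n = len(x)
    return exp_rate_mle(x) * (n - 1) / n
```

Here the correction happens to remove the bias exactly, since E[lam_hat] = lam * n / (n - 1); in general models the Cox-Snell correction only removes the O(1/n) term.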