1.
This paper discusses a new application of data mining, quantifying the importance of responding to trigger events with reactive contacts. Trigger events happen during a customer’s lifecycle and indicate some change in the relationship with the company. If detected early, the company can respond to the problem and retain the customer; otherwise the customer may switch to another company. It is usually easy to identify many potential trigger events. What is needed is a way of prioritizing which events demand interventions. We conceptualize the trigger event problem and show how survival analysis can be used to quantify the importance of addressing various trigger events. The method is illustrated on four real data sets from different industries and countries.
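A minimal sketch of the survival-analysis idea behind this abstract: compare Kaplan-Meier retention curves for customers with and without a given trigger event. The trigger name and all data below are invented for illustration; the paper's own method and data are not reproduced here.

```python
# Hedged sketch: Kaplan-Meier estimator comparing customer retention for a
# hypothetical "missed payment" trigger event. Data are invented.

def kaplan_meier(times, observed):
    """Return [(t, S(t))] at each churn time.
    times: follow-up time per customer; observed: 1 = churned, 0 = censored."""
    pairs = sorted(zip(times, observed))
    n_at_risk = len(pairs)
    surv, out = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = at_t = 0
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            at_t += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            out.append((t, surv))
        n_at_risk -= at_t
    return out

# Customers flagged by the (hypothetical) trigger event churn noticeably faster.
with_trigger = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 1])
no_trigger = kaplan_meier([6, 9, 12, 12, 15], [0, 1, 0, 0, 0])
```

The gap between the two curves is one way to quantify how much an event "demands" intervention.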
2.
王晓燕  张景辉  张淳  徐成 《机床与液压》2019,47(17):220-224
A machine tool's failure rate is determined by its inherent reliability. Operating-environment factors do not change the underlying failure law, but they can advance or delay the failure cycle to some extent, so accounting for operating-environment covariates is especially important when forecasting machine-tool spare parts. Analysis of the operating environment of a particular machine-tool spindle is used to screen out the main influential covariates, and, after a hypothesis test that the data are independent and identically distributed, a reliability model for the cylindrical roller bearings is built with a proportional hazards (PHM) regression model. On this basis, a renewal model is used to estimate failure times and, with reference to the target spare-parts availability rate, a demand-forecasting model for spindle spares is established. The results show that spare-parts forecasts that account for environmental factors are more accurate. The work offers a theoretical reference for forecasting non-repairable machine-tool spares.
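A minimal sketch of the PHM idea used in this abstract, not the paper's actual model: a proportional-hazards bearing model with a Weibull baseline, h(t|z) = h0(t)·exp(β·z), where the covariates z (e.g. load, temperature) and all parameter values are invented.

```python
# Hedged sketch: Weibull-baseline proportional-hazards reliability.
# All parameter values and covariates are assumptions for illustration.
import math

def ph_reliability(t, shape, scale, beta, z):
    """R(t|z) = exp(-H0(t) * exp(beta . z)) with Weibull cumulative hazard H0."""
    H0 = (t / scale) ** shape
    return math.exp(-H0 * math.exp(sum(b * x for b, x in zip(beta, z))))

beta = [0.9, 0.4]  # assumed covariate effects (load, temperature)
benign = ph_reliability(5000, 2.0, 10000, beta, [0.0, 0.0])
harsh = ph_reliability(5000, 2.0, 10000, beta, [1.0, 1.0])
```

A harsher environment multiplies the cumulative hazard and so lowers reliability at any fixed time, which is why ignoring it biases spare-parts forecasts.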
3.
Mixed model-based estimation of additive or geoadditive regression models has become popular throughout recent years. It provides a unified and modular framework that facilitates joint estimation of nonparametric covariate effects and the corresponding smoothing parameters. Therefore, extensions of mixed model-based inference to a Cox-type regression model for the hazard rate are considered, allowing for a combination of general censoring schemes for the survival times and a flexible, geoadditive predictor. In particular, the proposed methodology allows for arbitrary combinations of right, left, and interval censoring as well as left truncation. The geoadditive predictor comprises time-varying effects, nonparametric effects of continuous covariates, spatial effects, and potentially a number of extensions such as cluster-specific frailties or interaction surfaces. In addition, all covariates are allowed to be piecewise constant time-varying. Nonlinear and time-varying effects as well as the baseline hazard rate are modeled by penalized splines. Spatial effects can be included based on either Markov random fields or stationary Gaussian random fields. Estimation is based on a reparametrization of the model as a variance component mixed model. The variance parameters, corresponding to inverse smoothing parameters, can then be determined using an approximate marginal likelihood approach. An analysis of childhood mortality in Nigeria serves as an application, where the interval censoring framework additionally makes it possible to deal with the problem of heaped survival times. The effect of ignoring the impact of interval-censored observations is investigated in a simulation study.
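A minimal sketch of the penalized-spline building block this abstract relies on: a nonparametric effect is represented by a basis expansion whose wiggliness is controlled by a ridge-type penalty — the mixed-model view treats that penalty as an inverse variance component. The basis (truncated powers), data, and smoothing parameter below are invented for illustration.

```python
# Hedged sketch: penalized-spline fit of a smooth effect via ridge regression.
# The smoothing parameter lam is fixed here; in the paper's mixed-model
# formulation it would be estimated as a variance component.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Truncated-power basis: polynomial part (unpenalized) + knot part (penalized).
knots = np.linspace(0.1, 0.9, 9)
X = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                    [np.clip(x - k, 0, None) ** 3 for k in knots])
lam = 1e-4  # assumed smoothing parameter
P = np.diag([0.0] * 4 + [1.0] * len(knots))   # penalize only the knot terms
coef = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
fit = X @ coef
rmse = float(np.sqrt(np.mean((fit - np.sin(2 * np.pi * x)) ** 2)))
```

Reparametrizing the penalized coefficients as random effects turns the smoothing parameter into a ratio of variances, which is what lets marginal-likelihood machinery estimate it.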
4.
Forecasting with many predictors is of interest, for instance, in macroeconomics and finance. The forecast accuracy of two methods for dealing with many predictors is compared, that is, principal component regression (PCR) and principal covariate regression (PCovR). Simulation experiments with data generated by factor models and regression models indicate that, in general, PCR performs better for the first type of data and PCovR performs better for the second type of data. An empirical application to four key US macroeconomic variables shows that PCovR achieves improved forecast accuracy in some situations.
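A minimal sketch of PCR, the first of the two methods compared: regress the target on the leading principal components of a many-predictor matrix. The factor-model data here are simulated purely for illustration; the paper's simulation design is not reproduced.

```python
# Hedged sketch: principal component regression (PCR) on factor-model data.
# Dimensions, factors, and noise levels are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 200, 50, 2
F = rng.normal(size=(n, k))                  # latent factors
load = rng.normal(size=(k, p))               # factor loadings
X = F @ load + rng.normal(scale=0.1, size=(n, p))
y = F @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=n)

Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                            # first k principal components
b = np.linalg.lstsq(Z, y - y.mean(), rcond=None)[0]
pred = Z @ b + y.mean()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

When the data truly follow a factor model, the leading components span the predictive subspace, which is why PCR does well in that regime; PCovR instead picks components with the target in mind.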
5.
A study is presented showing how three state-of-the-art algorithms from the Face Recognition Vendor Test 2006 (FRVT 2006) are affected by factors related to face images and people. The recognition scenario compares highly controlled images to images taken of people as they stand before a camera in settings such as hallways and outdoors in front of buildings. A Generalized Linear Mixed Model (GLMM) is used to estimate the probability an algorithm successfully verifies a person conditioned upon the factors included in the study. The factors associated with people are: Gender, Race, Age and whether they wear Glasses. The factors associated with images are: the size of the face, edge density and region density. The setting, indoors versus outdoors, is also a factor. Edge density can change the estimated probability of verification dramatically, for example from about 0.15 to 0.85. However, this effect is not consistent across algorithm or setting. This finding shows that simple measurable factors are capable of characterizing face quality; however, these factors typically interact with both algorithm and setting.
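A minimal sketch of the kind of model underlying the reported probability swing: a plain logistic model (the fixed-effect part of a GLMM, without the random effects) in which a single image-quality factor, edge density, shifts the estimated verification probability. The coefficients below are invented to mimic the reported swing from roughly 0.15 to 0.85.

```python
# Hedged sketch: logistic link between edge density and verification
# probability. Intercept and slope are invented, not the study's estimates.
import math

def verify_prob(edge_density, intercept=-1.7, slope=3.4):
    """P(verification | edge density) under an assumed logistic model."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * edge_density)))

low, high = verify_prob(0.0), verify_prob(1.0)
```

In the study's full GLMM this slope would itself vary by algorithm and setting, which is the interaction effect the abstract highlights.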
6.
Reliability Meets Big Data: Opportunities and Challenges
Reliability field data such as that obtained from warranty claims and maintenance records have been used traditionally for such purposes as generating predictions for warranty costs and optimizing the cost of system operation and maintenance. In the current (and future) generation of many products, the nature of field reliability data is changing dramatically. In particular, products can be outfitted with sensors that can be used to capture information about how, when, and under what environmental and operating conditions products are being used. Today some of that information is being used to monitor system health, and interest is building in developing prognostic information systems. There are, however, many other potential applications for using such data. In this article we review some applications where field reliability data are used and explore some of the opportunities to use modern reliability data to provide stronger statistical methods to operate and predict the performance of systems in the field. We also provide some examples of recent technical developments designed to be used in such applications and outline remaining challenges.
7.
V. Slimacek  B. H. Lindqvist 《风能》2016,19(11):1991-2002
Reliability of wind turbines is analyzed with the use of an easily interpretable mathematical model based on a Poisson process, which takes into account jointly observable differences between turbines described by covariates (type of turbine, size of turbine, harshness of environment, installation date and seasonal effects) as well as unobservable differences modeled by a standard frailty approach known from survival analysis. The introduced model is applied to failure data from the WMEP database, and the fit of the model is checked. The paper demonstrates the usefulness of the model for determination of critical factors of wind turbine reliability, with potential for prediction for future installations. In particular, the model's ability to take into account unobserved heterogeneity is demonstrated. The model can easily be adapted for use with different datasets or for analysis of other repairable systems than wind turbines. Copyright © 2016 John Wiley & Sons, Ltd.
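A minimal sketch of the frailty idea in this abstract: each turbine's failure count is Poisson with rate λ0·exp(β·x)·Z, where x is an observed covariate and Z is a gamma-distributed frailty capturing unobserved heterogeneity. Marginally, the frailty inflates the variance above the Poisson mean, which is how unobserved heterogeneity shows up in data. All parameter values are invented.

```python
# Hedged sketch: simulate Poisson failure counts with an observed covariate
# and a gamma frailty; frailty produces overdispersion (variance > mean).
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.integers(0, 2, n)                          # e.g. harsh site indicator
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)  # mean 1, adds spread
rate = 1.5 * np.exp(0.5 * x) * frailty             # assumed lambda0 and beta
counts = rng.poisson(rate)

harsh_mean = counts[x == 1].mean()
calm_mean = counts[x == 0].mean()
overdispersion = counts[x == 0].var() / calm_mean  # > 1 signals frailty
```

A fitted model that ignored Z would have to misattribute this extra spread to the observed covariates, which is why the frailty term matters for identifying the truly critical factors.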
8.
This paper analyzes several commonly used spare-parts calculation models. Considering the influence of various covariates on equipment reliability characteristics when weapon systems operate in different environments, formulas for the failure rate and reliability under the combined influence of time and covariates are given, and on this basis a spare-parts calculation formula under covariate influence is derived. The method offers guidance for practical spare-parts analysis and calculation and has definite practical application value.
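A minimal sketch of the kind of calculation this abstract describes, not the paper's actual formulas: with a covariate-adjusted failure rate λ = λ0·exp(β·z), spare-part demand over a mission of length T is Poisson(λT), and we stock the smallest s whose fill probability P(N ≤ s) meets a target. All numbers are invented.

```python
# Hedged sketch: smallest stock level meeting a fill-rate target under a
# covariate-adjusted Poisson demand model. Parameters are assumptions.
import math

def spares_needed(lam0, beta, z, T, target=0.95):
    mu = lam0 * math.exp(beta * z) * T      # expected demand over the mission
    cdf, term, s = 0.0, math.exp(-mu), 0
    while True:
        cdf += term                         # accumulate Poisson pmf terms
        if cdf >= target:
            return s
        s += 1
        term *= mu / s

base = spares_needed(0.002, 0.8, 0.0, 1000)   # benign environment, z = 0
harsh = spares_needed(0.002, 0.8, 1.0, 1000)  # harsher environment, z = 1
```

The covariate term exp(β·z) scales expected demand, so the same fill-rate target requires more spares in the harsher environment.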
9.
Joint models for longitudinal and time-to-event data have recently attracted a lot of attention in statistics and biostatistics. Even though these models enjoy a wide range of applications in many different statistical fields, they have not yet found their rightful place in the toolbox of modern applied statisticians mainly due to the fact that they are rather computationally intensive to fit. The main difficulty arises from the requirement for numerical integration with respect to the random effects. This integration is typically performed using Gaussian quadrature rules whose computational complexity increases exponentially with the dimension of the random-effects vector. A solution to overcome this problem is proposed using a pseudo-adaptive Gauss-Hermite quadrature rule. The idea behind this rule is to use information about the shape of the integrand obtained by separately fitting a mixed model for the longitudinal outcome. Simulation studies show that the pseudo-adaptive rule performs excellently in practice, and is considerably faster than the standard Gauss-Hermite rule.
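A minimal sketch of the adaptive idea behind the proposed rule: Gauss-Hermite nodes are recentered and rescaled around the integrand's mode. Here the mode and scale are taken as known; in the paper's pseudo-adaptive rule they come from a separately fitted mixed model. The integrand below is a toy example, not a joint-model likelihood.

```python
# Hedged sketch: standard vs. recentered/rescaled Gauss-Hermite quadrature.
# A 5-point rule centered at 0 badly misses a sharply peaked integrand that
# the same rule, shifted to the mode, integrates almost exactly.
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(5)

def gh_integrate(f, center, scale):
    """int f(b) db ≈ sum_i w_i exp(x_i^2) f(center + scale*x_i) * scale."""
    b = center + scale * nodes
    return float(np.sum(weights * np.exp(nodes ** 2) * f(b)) * scale)

# Sharply peaked integrand (like a random-effect posterior far from 0);
# it integrates to 1 exactly.
mu, sd = 2.0, 0.1
f = lambda b: np.exp(-0.5 * ((b - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
naive = gh_integrate(f, 0.0, 1.0)            # nodes at 0: misses the peak
adaptive = gh_integrate(f, mu, np.sqrt(2) * sd)  # nodes at the mode
```

Because the adaptive nodes land where the integrand has mass, far fewer quadrature points are needed per subject, which is the source of the reported speed-up.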
10.
In this paper we introduce a general extreme-value regression model and derive Cox and Snell’s (1968) general formulae for second-order biases of maximum likelihood estimates (MLEs) of the parameters. We obtain formulae which can be computed by means of weighted linear regressions. Furthermore, we give the skewness of order n^{-1/2} of the maximum likelihood estimators of the parameters by using Bowman and Shenton’s (1988) formula. A simulation study with results obtained with the use of Cox and Snell’s (1968) formulae is discussed. Practical uses of this model and of the derived formulae for bias correction are also presented.
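A minimal sketch of the bias-correction idea, shown for the exponential distribution rather than the paper's extreme-value model: the MLE of the rate, 1/x̄, has second-order bias of order λ/n, and the Cox-Snell corrected estimator subtracts an estimate of that bias.

```python
# Hedged sketch: Monte Carlo check that the Cox-Snell-style correction
# shrinks the bias of the exponential-rate MLE. Sample sizes are invented.
import random

random.seed(42)
lam, n, reps = 2.0, 10, 4000
mle_sum = corrected_sum = 0.0
for _ in range(reps):
    xbar = sum(random.expovariate(lam) for _ in range(n)) / n
    mle = 1.0 / xbar                       # MLE of the rate
    mle_sum += mle
    corrected_sum += mle * (1 - 1.0 / n)   # mle minus the estimated bias mle/n

mle_bias = mle_sum / reps - lam
corrected_bias = corrected_sum / reps - lam
```

For this particular model the corrected estimator is exactly unbiased; in the paper's extreme-value regression the analogous correction removes the O(n^{-1}) bias term only.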