Similar Articles
1.
We consider varying coefficient models, which extend classical linear regression models in the sense that the regression coefficients are replaced by functions of certain variables (often time). Varying coefficient models have been popular in longitudinal and panel data studies, and have been applied in fields such as finance and the health sciences. We estimate the coefficient functions by splines. An important question in a varying coefficient model is whether a coefficient function is monotone or convex. We develop consistent testing procedures for monotonicity and convexity. Moreover, we provide procedures to test simultaneously the shapes of certain coefficient functions in a varying coefficient model. The tests use constrained and unconstrained regression splines. The performance of the proposed tests is illustrated on simulated data. We also give a real data application.
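The estimation step described above can be sketched in a few lines of numpy. This is a minimal illustration with made-up data, not the paper's procedure: a truncated-power basis stands in for the regression splines, and the true coefficient function is an assumed monotone line.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
t = np.sort(rng.uniform(0.0, 1.0, n))        # index variable (e.g. time)
x = rng.normal(size=n)                        # covariate
beta_true = 1.0 + 2.0 * t                     # assumed monotone coefficient function
y = beta_true * x + 0.1 * rng.normal(size=n)

# Truncated-power spline basis for beta(t) (a simple stand-in for B-splines)
knots = np.linspace(0.1, 0.9, 8)
B = np.column_stack([np.ones(n), t] + [np.clip(t - k, 0.0, None) for k in knots])

# In the model y = beta(t) * x + eps, the spline coefficients enter linearly
# through the basis columns scaled by x, so least squares applies directly.
Z = B * x[:, None]
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = B @ coef                           # estimated coefficient function at t
```

A monotonicity test would then compare this unconstrained fit against a fit with shape-constrained spline coefficients.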

2.
We consider P-spline smoothing in a varying coefficient regression model when the response is subject to random right censoring. We introduce two data transformation approaches to construct a synthetic response vector that is used in a penalized least squares optimization problem. We prove the consistency and asymptotic normality of the P-spline estimators for a diverging number of knots and show by simulation studies and real data examples that the combination of a data transformation for censored observations with P-spline smoothing leads to good estimators of the varying coefficient functions.
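The penalized least squares step at the core of P-spline smoothing can be sketched as follows. This is a toy illustration only (the censoring transformation is omitted, the data are made up, and a truncated-power basis stands in for the B-spline basis usually paired with the difference penalty):

```python
import numpy as np

def pspline_fit(B, y, lam, d=2):
    """Penalized least squares: minimize ||y - B a||^2 + lam * ||D_d a||^2,
    where D_d is the d-th order difference matrix acting on the coefficients."""
    p = B.shape[1]
    D = np.diff(np.eye(p), n=d, axis=0)
    return np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)

# Toy demo: smooth a noisy sine curve (hypothetical data)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.2 * rng.normal(size=t.size)
knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack([np.ones_like(t), t] + [np.clip(t - k, 0.0, None) for k in knots])
fit = B @ pspline_fit(B, y, lam=0.1)
```

Larger `lam` trades fidelity to the data for smoothness; the penalized fit always has a residual sum of squares at least as large as the unpenalized one.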

3.
An algorithm is developed for the simultaneous optimization of several response functions that depend on the same set of controllable variables and are adequately represented by polynomial regression models of the same degree. The data are first checked for linear dependencies among the responses. If such dependencies exist, a basic set of responses among which no linear functional relationships exist is chosen and used in developing a function that measures the distance of the vector of estimated responses from the estimated “ideal” optimum. This distance function permits the user to account for the variances and covariances of the estimated responses and for the random error variation associated with the estimated ideal optimum. Suitable operating conditions for the simultaneous optimization of the responses are specified by minimizing the prescribed distance function over the experimental region. An extension of the optimization procedure to mixture experiments is also given and the method is illustrated by two examples.
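The distance-function idea can be sketched with a deliberately simple example: two hypothetical quadratic response surfaces in one controllable variable, each with its own "ideal" optimum, combined through an assumed variance-weighted distance that is minimized over the experimental region by a grid search.

```python
import numpy as np

# Two fitted response surfaces (hypothetical, one controllable variable x)
def y1(x): return -(x - 0.3) ** 2 + 1.0   # individual optimum at x = 0.3
def y2(x): return -(x - 0.7) ** 2 + 2.0   # individual optimum at x = 0.7

xs = np.linspace(0.0, 1.0, 1001)          # experimental region
ideal = np.array([y1(0.3), y2(0.7)])      # estimated "ideal" optimum

# Weighted distance of the estimated responses from the ideal point;
# the weights play the role of inverse variances (assumed values here)
weights = np.array([1.0, 0.5])
resp = np.column_stack([y1(xs), y2(xs)])
dist = np.sqrt(((resp - ideal) ** 2 * weights).sum(axis=1))
x_opt = xs[np.argmin(dist)]               # compromise operating condition
```

The compromise lands between the two individual optima, pulled toward the response with the larger weight.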

4.
We consider a varying coefficient regression model for sparse functional data, with a time-varying response variable depending linearly on some time-independent covariates, with coefficients that are functions of time-dependent covariates. Based on spline smoothing, we propose data-driven simultaneous confidence corridors for the coefficient functions with asymptotically correct confidence level. Such confidence corridors are useful benchmarks for statistical inference on the global shapes of coefficient functions under any hypotheses. Simulation experiments corroborate the theoretical results. An example from a CD4/HIV study is used to illustrate how inference is made, with computable p-values, on the effects of smoking, pre-infection CD4 cell percentage and age on the CD4 cell percentage of HIV-infected patients under treatment.

5.
We consider engineering design optimization problems where the objective and/or constraint functions are evaluated by means of computationally expensive blackboxes. Our practical optimization strategy consists of solving surrogate optimization problems in the search step of the mesh adaptive direct search algorithm. In this paper, we consider locally weighted regression models to build the necessary surrogates, and present three ideas for appropriate and effective use of locally weighted scatterplot smoothing (LOWESS) models for surrogate optimization. First, a method is proposed to reduce the computational cost of LOWESS models. Second, a local scaling coefficient is introduced to adapt LOWESS models to the density of neighboring points while retaining smoothness. Finally, an appropriate order error metric is used to select the optimal shape coefficient of the LOWESS model. Our surrogate-assisted optimization approach utilizes LOWESS models to both generate and rank promising candidates found in the search and poll steps. The “real” blackbox functions that govern the original optimization problem are then evaluated at these ranked candidates with an opportunistic strategy, reducing CPU time significantly. Computational results are reported for four engineering design problems with up to six variables and six constraints. The results demonstrate the effectiveness of the LOWESS models as well as the order error metric for surrogate optimization.
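A single LOWESS evaluation, the building block of such surrogates, can be sketched as below: a locally weighted linear fit at a query point with tricube weights over the nearest neighbours. This is a bare-bones illustration (one evaluation, no robustness iterations, and none of the paper's scaling or shape-coefficient ideas).

```python
import numpy as np

def lowess_point(x0, x, y, frac=0.3):
    """Locally weighted linear regression at x0 (one LOWESS smoother
    evaluation with tricube weights, no robustness passes)."""
    n = len(x)
    k = max(2, int(frac * n))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                       # k nearest neighbours of x0
    w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3  # tricube kernel weights
    X = np.column_stack([np.ones(k), x[idx] - x0])
    WX = w[:, None] * X
    beta = np.linalg.solve(X.T @ WX, X.T @ (w * y[idx]))
    return beta[0]                                # fitted value at x0

# Toy check data: noiseless line, which a local linear fit reproduces exactly
xs = np.linspace(0.0, 1.0, 50)
ys = 2.0 * xs + 1.0
```

In a surrogate-assisted search, `lowess_point` would be evaluated at candidate designs to rank them before spending expensive blackbox calls.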

6.
Many popular forecasting and time-series analysis methods assume that the variable to be forecast can be expressed as a linear function of a set of predictors. The predictors may include variables related in either a correlative or causal fashion to the response variable, lagged values of this variable, or known mathematical functions of time. The method of least squares is used almost exclusively to estimate the parameters in these models. This paper discusses two hazards in the indiscriminate use of least squares: nonnormality of the observations on the variable of interest and multicollinearity among the predictors. Robust estimation methods are suggested as alternatives to least squares for nonnormal data, and a robust version of exponential smoothing is developed. A small Monte Carlo study indicates that the robust procedure can be superior to ordinary exponential smoothing in many situations. The sources and effects of multicollinearity are discussed, and several diagnostic statistics are presented. Methods for dealing with multicollinearity are reviewed, including collecting additional data, variable selection, and biased estimation. An example of ridge regression is included.
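Ridge regression, the biased-estimation remedy mentioned at the end, can be sketched as follows on made-up, deliberately collinear data. The example shows the key property: adding a ridge penalty shrinks the coefficient vector relative to ordinary least squares while the well-determined combination of the predictors is barely changed.

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate (X'X + lam*I)^(-1) X'y; lam > 0 stabilizes the solution
    when the predictors are nearly collinear (lam = 0 gives least squares)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Two almost-collinear predictors (hypothetical data)
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + 1e-3 * rng.normal(size=100)   # nearly a copy of x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=100)

b_ols = ridge(X, y, 0.0)                 # ordinary least squares
b_ridge = ridge(X, y, 1.0)               # ridge with a modest penalty
```

Individually, `b_ols[0]` and `b_ols[1]` are poorly determined (their difference is almost unidentified), whereas the ridge estimate pins both near 1.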

7.
The conditional variance function in a heteroscedastic, nonparametric regression model is estimated by linear smoothing of squared residuals. Attention is focused on local polynomial smoothers. Both the mean and variance functions are assumed to be smooth, but neither is assumed to be in a parametric family. The biasing effect of preliminary estimation of the mean is studied, and a degrees-of-freedom correction of bias is proposed. The corrected method is shown to be adaptive in the sense that the variance function can be estimated with the same asymptotic mean and variance as if the mean function were known. A proposal is made for using standard bandwidth selectors for estimating both the mean and variance functions. The proposal is illustrated with data from the LIDAR method of measuring atmospheric pollutants and from turbulence-model computations.
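The two-stage idea — smooth the data to estimate the mean, then smooth the squared residuals to estimate the variance function — can be sketched with a simple Nadaraya-Watson smoother in place of the local polynomial smoothers studied in the paper (data and noise model are made up, and no degrees-of-freedom correction is applied):

```python
import numpy as np

def nw(x0, x, y, h=0.1):
    """Nadaraya-Watson kernel smoother, a basic linear smoother."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 1.0, n)
sd = 0.5 + x                                      # assumed heteroscedastic noise level
y = np.sin(2 * np.pi * x) + sd * rng.normal(size=n)

# Stage 1: estimate the mean function and form squared residuals
mhat = np.array([nw(xi, x, y) for xi in x])
r2 = (y - mhat) ** 2
# Stage 2: smooth the squared residuals to estimate the variance function
vhat = lambda x0: nw(x0, x, r2)
```

The estimate recovers the increasing variance: it is larger where the true noise level is larger.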

8.
Bernstein estimators have attracted considerable attention as smooth nonparametric estimators for distribution functions, densities, copulas and copula densities. The present paper adds a parallel result for the first-order derivative of a copula function. This result then leads to Bernstein estimators for a conditional distribution function and its important functionals, such as the regression and quantile functions. Results of independent interest are also derived, such as an almost sure oscillation behavior of the empirical copula process and a Bahadur-type almost sure asymptotic representation for the Bernstein estimator of a regression quantile function. Simulations demonstrate the good performance of the proposed estimators.
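The basic Bernstein-smoothing construction behind these estimators can be sketched for the simplest case, a distribution function on [0, 1]: evaluate the empirical CDF at the grid points k/m and average with Bernstein polynomial weights. This is a generic illustration on made-up uniform data, not the paper's copula-derivative estimator.

```python
import numpy as np
from math import comb

def bernstein_cdf(u, data, m=30):
    """Bernstein-polynomial smoother of the empirical CDF on [0, 1]:
    sum_k Fn(k/m) * C(m,k) * u^k * (1-u)^(m-k)."""
    k = np.arange(m + 1)
    weights = (np.array([comb(m, j) for j in k], dtype=float)
               * u ** k * (1.0 - u) ** (m - k))
    Fn = np.array([np.mean(data <= j / m) for j in k])   # empirical CDF at k/m
    return float(Fn @ weights)

rng = np.random.default_rng(0)
u01 = rng.uniform(0.0, 1.0, 500)     # toy sample whose true CDF is the identity
```

The smoother interpolates the empirical CDF at the endpoints and gives a smooth, polynomial estimate in between.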

9.
Quantile regression as an alternative to conditional mean regression (i.e., least-squares regression) is widely used in many areas. It can be used to study the covariate effects on the entire response distribution by fitting quantile regression models at multiple different quantiles or even fitting the entire regression quantile process. However, estimating the regression quantile process is inherently difficult because the induced conditional quantile function needs to be monotone at all covariate values. In this article, we propose a regression quantile process estimation method based on monotone B-splines. The proposed method can easily ensure the validity of the regression quantile process and offers a concise framework for variable selection and adaptive complexity control. We thoroughly investigated the properties of the proposed procedure, both theoretically and numerically. We also used a case study on wind power generation to demonstrate its use and effectiveness in real problems. Supplementary materials for this article are available online.

10.
Stress-related problems have not received the same attention as the minimum-compliance topology optimization problem in the literature. Continuum structural topology optimization with stress constraints has broad engineering application prospects, but several problems remain open, such as stress concentration and the construction of an equivalent approximate optimization model. This paper presents a new and effective topology optimization method for continuum structures with stress constraints, with the structural volume as the objective function. To address the stress concentration issue, an approximate stress-gradient evaluation for any element is introduced, and a total aggregated normalized stress-gradient constraint is constructed for the optimized structure under the r-th load case. To obtain stably convergent series solutions and enhance control of the stress level, two p-norm global stress-constraint functions with different indexes are adopted, and weighted p-norm global stress-constraint functions are introduced for each load case. An equivalent topology optimization model with a reduced set of stress constraints is then constructed, incorporating the rational approximation for material properties, an active-constraint technique, a trust-region scheme, and an effective local stress approach, such as the qp approach, to resolve the stress-singularity phenomenon. On this basis, a set of quadratic explicit stress approximations is constructed from stress sensitivities and the method of moving asymptotes.
An algorithm for the one-level optimization problem with artificial variables and many potentially inactive design variables is proposed, adopting an inequality-constrained nonlinear programming method with simple trust regions based on primal-dual theory, in which the nonsmooth expressions for the design-variable solutions are reformulated as smoothing functions of the Lagrange multipliers via a novel smoothing function. Finally, a two-level optimization design scheme with an active-constraint technique, i.e. varied constraint limits, is proposed to handle the aggregation constraints, which are typically loose (inactive) in conventional structural optimization methods. A novel structural topology optimization method with stress constraints and its algorithm are thus formed, and examples demonstrate that the proposed method is feasible and very effective. © 2016 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons Ltd.
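The p-norm global stress aggregation at the heart of such formulations can be sketched in isolation: it replaces the nonsmooth maximum of many element stresses with one smooth function that upper-bounds the maximum and tightens toward it as the index p grows. (Illustrative values only; the paper's weighted and normalized variants are not reproduced.)

```python
import numpy as np

def pnorm_stress(stresses, limit, p=8):
    """Smooth p-norm aggregation of normalized element stresses.
    A single constraint value <= 1 approximates max(stress) <= limit;
    the value upper-bounds the normalized maximum and decreases toward
    it as p increases."""
    s = np.asarray(stresses, dtype=float) / limit
    return np.sum(s ** p) ** (1.0 / p)

# Hypothetical element stresses against an allowable limit of 100
elem_stress = np.array([40.0, 80.0, 95.0])
g = pnorm_stress(elem_stress, 100.0, p=8)   # single smooth constraint value
```

Aggregating many element constraints into one (or a few) such functions is what makes the reduced-constraint optimization model tractable for gradient-based solvers like the method of moving asymptotes.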

11.
We study joint nonparametric estimators of the mean and the dispersion functions in extended double exponential family models. The starting point is the exponential family and the generalized linear models setting. The extended models allow for both overdispersion and underdispersion, or even a combination of both. We simultaneously estimate the dispersion function and the mean function by using P-splines with a difference type of penalty to avoid overfitting. Special attention is given to the smoothing parameter selection as well as to implementation issues. The performance of the method is investigated via simulations. A comparison with other available methods is made. We provide applications to several sets of data, including continuous data, counts and proportions.

12.
Quantile regression is an important tool to determine the quality level of service, product, and operation systems via stochastic simulation. It is often known that the quantiles of the output distribution are monotonic functions of certain inputs to the simulation model. Because there is typically high variability in the estimation of tail quantiles, it can be valuable to incorporate this information in quantile modeling. However, the existing literature on monotone quantile regression with multiple inputs is sparse. In this article, we propose a class of monotonic regression models, which consists of functional analysis of variance (FANOVA) decomposition components modeled with Bernstein polynomial bases for estimating quantiles as a function of multiple inputs. The polynomial degrees of the bases for the model and the FANOVA components included in the model are selected by a greedy algorithm. Real examples demonstrate the advantages of incorporating the monotonicity assumption in quantile regression and the good performance of the proposed methodology for estimating quantiles. Supplementary materials for this article are available online.
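The reason Bernstein polynomial bases are convenient for monotone modeling can be shown directly: if the basis coefficients are nondecreasing, the resulting curve is nondecreasing. The sketch below demonstrates this sufficient condition on one input (an illustration of the basis property only, not the paper's multi-input FANOVA model):

```python
import numpy as np
from math import comb

def bernstein_curve(u, coefs):
    """Evaluate a Bernstein-basis expansion on [0, 1]. If the coefficients
    are nondecreasing, the curve is nondecreasing in u (the derivative is a
    nonnegative combination of lower-degree Bernstein polynomials)."""
    coefs = np.asarray(coefs, dtype=float)
    m = len(coefs) - 1
    k = np.arange(m + 1)
    basis = (np.array([comb(m, j) for j in k], dtype=float)
             * u[:, None] ** k * (1.0 - u[:, None]) ** (m - k))
    return basis @ coefs

u = np.linspace(0.0, 1.0, 101)
q = bernstein_curve(u, [0.0, 0.2, 0.5, 0.9, 1.0])   # ordered coefficients
```

Imposing ordered coefficients thus turns the hard functional constraint "the quantile curve is monotone" into simple linear inequalities on the parameters.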

13.
Considerable interest already exists in assessing percentiles of speed distributions; for example, monitoring the 85th percentile speed is a common feature of the investigation of many road safety interventions. However, unlike the mean, where t-tests and ANOVA can be used to provide evidence of a statistically significant change, inference on these percentiles is much less common. This paper examines the potential role of quantile regression for modelling the 85th percentile, or any other quantile. Given that crash risk may increase disproportionately with increasing relative speed, it may be argued that these quantiles are of more interest than the conditional mean. In common with the more usual linear regression, quantile regression admits a simple test of whether the 85th percentile speed has changed following an intervention, analogous to using the t-test to determine whether the mean speed has changed, by considering the significance of parameters fitted to a design matrix. After briefly outlining the technique and examining an application with a widely published dataset concerning speed measurements taken around the introduction of signs in Cambridgeshire, this paper demonstrates the potential of quantile regression modelling by examining recent data from Northamptonshire collected in conjunction with a "community speed watch" programme. Freely available software is used to fit these models, and it is hoped that the potential benefits of using quantile regression methods when examining and analysing speed data are demonstrated.
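For a single before/after intervention indicator, the quantile regression fit has a simple closed form that makes the idea concrete: the check (pinball) loss is minimized by the group-wise sample quantiles, so the "intervention" coefficient at the 85th percentile is just the difference of the two 85th percentiles. The sketch below uses entirely hypothetical speed data.

```python
import numpy as np

def pinball(u, tau):
    """Quantile-regression check loss rho_tau(u)."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(1)
before = rng.normal(48.0, 8.0, 300)   # hypothetical speeds before signs (mph)
after = rng.normal(44.0, 6.0, 300)    # hypothetical speeds after

tau = 0.85
q_before = np.quantile(before, tau)
q_after = np.quantile(after, tau)
# With a binary intervention design, the tau-quantile regression fit equals
# the group-wise sample quantiles, so the intervention coefficient is:
slope = q_after - q_before
```

Inference on `slope` (e.g. via the standard errors a quantile regression package reports) then plays the role the t-test plays for the mean.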

14.
In optimization under uncertainty for engineering design, the behavior of the system outputs due to uncertain inputs needs to be quantified at each optimization iteration, but this can be computationally expensive. Multifidelity techniques can significantly reduce the computational cost of Monte Carlo sampling methods for quantifying the effect of uncertain inputs, but existing multifidelity techniques in this context apply only to Monte Carlo estimators that can be expressed as a sample average, such as estimators of statistical moments. Information reuse is a particular multifidelity method that treats previous optimization iterations as lower fidelity models. This work generalizes information reuse to be applicable to quantities whose estimators are not sample averages. The extension makes use of bootstrapping to estimate the error of estimators and the covariance between estimators at different fidelities. Specifically, the horsetail matching metric and quantile function are considered as quantities whose estimators are not sample averages. In an optimization under uncertainty for an acoustic horn design problem, generalized information reuse demonstrated computational savings of over 60% compared with regular Monte Carlo sampling.
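The bootstrapping ingredient can be sketched on its own: resample the data with replacement and recompute the estimator to approximate its standard error, which works even for estimators like quantiles that are not sample averages. (Generic illustration with made-up data; the paper additionally bootstraps covariances between fidelities.)

```python
import numpy as np

def bootstrap_se(sample, stat, n_boot=500, seed=0):
    """Bootstrap standard error of an arbitrary estimator `stat`,
    e.g. a sample quantile, whose sampling variance has no simple
    closed form."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    reps = np.array([stat(rng.choice(sample, size=n, replace=True))
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

rng = np.random.default_rng(1)
data = rng.normal(size=200)                               # hypothetical outputs
se_q90 = bootstrap_se(data, lambda s: np.quantile(s, 0.9))  # SE of the 90% quantile
```

The same resampling applied jointly to two estimators yields the cross-fidelity covariance estimates the generalized information-reuse estimator needs.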

15.
This paper deals with the large sample estimation of functions such as the quantile function, survival function, and the hazard function of the chi distribution using a few optimally selected order statistics. These functions arise in the study of life models and are functions of the location and scale parameters. The optimum ranks of the order statistics are obtained by maximizing the asymptotic relative efficiencies.

16.
For structural material optimization problems with frequency constraints, a structural material optimization method with varying frequency-interval constraints is proposed, based on the ideas of structural topology optimization. Drawing on homogenization and the ICM (Independent, Continuous, Mapping) method, the reciprocals of the microscale element topology variables are taken as design variables, the equivalent mass matrix of the macroscale element and its derivatives are derived, and a first-order approximate expansion of the frequencies is obtained. Combining this with the varying frequency-interval constraint idea, an approximate continuum microstructural topology optimization model is obtained, with the structural mass as the objective function and the frequencies as constraints; it is solved by a dual method. Numerical examples verify the effectiveness and feasibility of the method and show that the optimization results obtained when the influence of changes in the mass matrix is taken into account are more reasonable.

17.
Epoxy dispensing is one of the most popular processes for microchip encapsulation in chip-on-board (COB) packages. However, determining the process parameter settings that give optimal encapsulation quality is difficult because of the complex behaviour of the encapsulant during dispensing and the uncertainties caused by the fuzziness of epoxy dispensing systems. In conventional regression models, deviations between the observed and estimated values are assumed to follow a probability distribution. However, when the data are irregular, the resulting regression model has an unnaturally wide possibility range. In fact, in processes such as epoxy dispensing these deviations can be regarded as system fuzziness that can be dealt with properly using fuzzy regression methods. In this paper, a fuzzy regression approach with fuzzy intervals for process modelling of epoxy dispensing for microchip encapsulation is described. Two fuzzy regression models, relating three process parameters to two quality characteristics of epoxy dispensing, were developed. They were then used to formulate a fuzzy multi-objective optimization problem, and a fuzzy linear programming technique was employed to formulate the optimization model. By solving the model, an optimal setting of the process parameters can be obtained. Validation experiments were conducted to evaluate the effectiveness of the proposed approach to process modelling and optimization of epoxy dispensing for microchip encapsulation.

18.
By assuming a parametric model for a linear one-port or two-port, the time-domain resolution of a vector network analyzer can be significantly improved with respect to the Rayleigh limit. The measurement problem is formulated as a nonlinear least squares parameter estimation problem involving the extremization of a cost function. An extremization algorithm with good global convergence properties is presented for the case of discontinuities of small reflectivity modeled as simple lumped frequency-dependent elements. The reflection coefficient at either port of the device under test is modeled as a superposition of modulated complex sinusoids. Through optimization of a sequence of cost functions, the algorithm produces a sequence of fits for models that incorporate an increasing number of discontinuities.
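One piece of such a fit can be sketched directly: for fixed candidate delays, the complex amplitudes of the superposed sinusoids enter the reflection-coefficient model linearly, so they can be recovered by linear least squares (the delays themselves would be refined by the nonlinear optimizer the abstract describes). All frequencies, delays, and amplitudes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.linspace(1e9, 5e9, 201)                  # measurement frequencies (Hz), hypothetical
true_delays = np.array([1.0e-9, 2.5e-9])         # two small discontinuities (s)
true_amps = np.array([0.05, 0.02])               # their reflectivities

# Reflection coefficient: superposition of complex exponentials plus noise
A = np.exp(-2j * np.pi * np.outer(f, true_delays))
gamma = A @ true_amps + 1e-4 * (rng.normal(size=f.size)
                                + 1j * rng.normal(size=f.size))

# Given the delays, the amplitudes are a linear least squares problem
amps, *_ = np.linalg.lstsq(A, gamma, rcond=None)
```

Separating the linear (amplitude) and nonlinear (delay) parameters this way is a common device for keeping the cost-function extremization low-dimensional.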

19.
Desirability functions (DFs) are commonly used in the optimization of design parameters with multiple quality characteristics to obtain a good compromise among predicted response models obtained from experimental designs. Besides discussing multi-objective approaches for the optimization of DFs, we present a brief review of the literature on the most commonly used Derringer and Suich type of DFs and others, as well as their capabilities and limitations. Optimization of the DFs of Derringer and Suich is a challenging problem. Although they have an advantageous shape over other DFs, their nonsmooth nature is a drawback. Commercially available software products used by quality engineers usually optimize these functions by derivative-free search methods on the design domain (such as Design-Expert), which involves the risk of not finding the global optimum in a reasonable time. The use of gradient-based methods (as in MINITAB) after smoothing nondifferentiable points has also been proposed, as have various metaheuristics and interactive multi-objective approaches, which have their own drawbacks. In this study, by utilizing a reformulation of the DFs, it is shown that the nonsmooth optimization problem becomes a nonconvex mixed-integer nonlinear problem. Then, a continuous relaxation of this problem can be solved with nonconvex and global optimization approaches supported by widely available software programs. We demonstrate our findings on two well-known examples from the quality engineering literature and their extensions.
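The Derringer and Suich construction itself is easy to sketch: each response is mapped to a desirability in [0, 1] (here the two-sided, target-is-best form, whose kink at the target is the nonsmoothness discussed above), and the individual desirabilities are combined by a geometric mean. Parameter values are illustrative.

```python
import numpy as np

def desirability_target(y, low, target, high, s=1.0, t=1.0):
    """Derringer-Suich two-sided desirability for a target-is-best response:
    0 outside [low, high], 1 at the target, power-law ramps in between.
    The kink at `target` is the nondifferentiable point."""
    y = np.asarray(y, dtype=float)
    d = np.zeros_like(y)
    left = (y >= low) & (y <= target)
    right = (y > target) & (y <= high)
    d[left] = ((y[left] - low) / (target - low)) ** s
    d[right] = ((high - y[right]) / (high - target)) ** t
    return d

def overall(ds):
    """Composite desirability: geometric mean of the individual values."""
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / len(ds))
```

Because the geometric mean is zero whenever any single desirability is zero, the composite heavily penalizes settings where even one response is unacceptable.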

20.
Sparse penalized quantile regression is a useful tool for variable selection, robust estimation, and heteroscedasticity detection in high-dimensional data analysis. The computational issue of the sparse penalized quantile regression has not yet been fully resolved in the literature, due to nonsmoothness of the quantile regression loss function. We introduce fast alternating direction method of multipliers (ADMM) algorithms for computing the sparse penalized quantile regression. The convergence properties of the proposed algorithms are established. Numerical examples demonstrate the competitive performance of our algorithm: it significantly outperforms several other fast solvers for high-dimensional penalized quantile regression. Supplementary materials for this article are available online.
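What makes ADMM attractive here is that both nonsmooth pieces — the l1 penalty and the quantile check loss — have closed-form proximal operators, so each ADMM subproblem is an elementwise update. The two operators can be sketched as follows (generic building blocks, not the paper's specific algorithm):

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||.||_1: the z-update in ADMM for
    l1 (lasso-type) penalties, applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def prox_check_loss(v, tau, rho):
    """Proximal operator of the quantile check loss rho_tau scaled by 1/rho:
    prox(v) = v - clip(v, (tau - 1)/rho, tau/rho), also elementwise.
    This is the residual update in ADMM for quantile regression."""
    return v - np.clip(v, (tau - 1.0) / rho, tau / rho)
```

An ADMM solver alternates a (smooth) least-squares-type step with these two cheap elementwise maps, which is what makes it fast in high dimensions.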

