Similar Documents
20 similar documents found.
1.
The identification and representation of uncertainty is recognized as an essential component in model applications. One important approach to identifying uncertainty is sensitivity analysis, which evaluates how variations in the model output can be apportioned to variations in model parameters. One of the most popular sensitivity analysis techniques is the Fourier amplitude sensitivity test (FAST). The main mechanism of FAST is to assign each parameter a distinct integer frequency (a characteristic frequency) through a periodic sampling function; for a specific parameter, the variance contribution can then be singled out of the model output at that characteristic frequency via a Fourier transformation. One limitation of FAST is that it can only be applied to models with independent parameters, yet in many cases the parameters are correlated with one another. In this study, we propose to extend FAST to models with correlated parameters. The extension is based on reordering the independent samples used in traditional FAST. We apply the improved FAST to linear, nonlinear, nonmonotonic and real application models. The results show that the sensitivity indices derived by FAST are in good agreement with those from the correlation ratio sensitivity method, a nonparametric method for models with correlated parameters.
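
Below is a minimal sketch of the classical FAST machinery the abstract describes: each parameter is driven by its own integer frequency along a periodic search curve, and the Fourier power at that frequency (and its harmonics) yields the parameter's first-order sensitivity index. The frequencies, sample size and toy model are illustrative assumptions; the paper's reordering step for correlated parameters is not reproduced here.

```python
import numpy as np

def fast_first_order(model, omegas, n_samples=2049, n_harmonics=4):
    """Classical FAST: first-order sensitivity indices for independent parameters.

    model  -- callable mapping an (n_samples, k) array in [0, 1]^k to n_samples outputs
    omegas -- distinct integer characteristic frequencies, one per parameter
    """
    s = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    # Periodic search-curve sampling: each parameter oscillates at its own frequency.
    x = 0.5 + np.arcsin(np.sin(np.outer(s, omegas))) / np.pi
    y = model(x)

    # Fourier power of the output along the search curve (DC term dropped).
    coeffs = np.fft.rfft(y) / n_samples
    power = np.abs(coeffs[1:]) ** 2
    total_variance = 2.0 * power.sum()

    indices = []
    for w in omegas:
        harmonics = [p * w for p in range(1, n_harmonics + 1)]
        partial = 2.0 * sum(power[h - 1] for h in harmonics)
        indices.append(partial / total_variance)
    return np.array(indices)

# Toy model: y = x1 + 2*x2, so x2 should receive roughly four times x1's index.
model = lambda x: x[:, 0] + 2.0 * x[:, 1]
print(fast_first_order(model, omegas=[11, 35]))   # approx [0.2, 0.8]
```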

2.
The Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. To date, FAST analysis has mainly been confined to estimating the partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we show theoretically that FAST analysis can estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances, and construct hypothesis tests to reduce the effect of sampling errors on that estimation. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimation. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions, and the calculation of their corresponding estimation errors under different sampling schemes, help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
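
As a companion to the sampling comparison above, here is a hedged sketch of random balance design (RBD) sampling for FAST: every parameter shares the single frequency 1 but receives its own random permutation of the design points, and the permutation is undone per parameter before taking the Fourier transform. The Ishigami-style test function and harmonic count are assumptions; the paper's hypothesis tests and bias corrections are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbd_fast(model, k, n_samples=1001, n_harmonics=6):
    """Random-balance-design FAST: all parameters share frequency 1,
    decoupled by independent random permutations of the design points."""
    s = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    perms = [rng.permutation(n_samples) for _ in range(k)]
    x = np.column_stack([0.5 + np.arcsin(np.sin(s[p])) / np.pi for p in perms])
    y = model(x)
    var_y = y.var()

    indices = []
    for p in perms:
        # Undo parameter i's permutation so the output is periodic in its input again.
        y_re = np.empty_like(y)
        y_re[p] = y
        spectrum = np.abs(np.fft.rfft(y_re) / n_samples) ** 2
        partial = 2.0 * spectrum[1:n_harmonics + 1].sum()
        indices.append(partial / var_y)
    return np.array(indices)

# Ishigami-style toy model on [0, 1]^3, rescaled internally to [-pi, pi].
def model(x):
    a = 2 * np.pi * x - np.pi
    return np.sin(a[:, 0]) + 7 * np.sin(a[:, 1]) ** 2 + 0.1 * a[:, 2] ** 4 * np.sin(a[:, 0])

print(rbd_fast(model, k=3))   # roughly [0.31, 0.44, 0.0], plus sampling noise
```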

3.
Current constraint-based approaches to the discovery of causal structure in statistical data are unable to discriminate between causal models which entail identical sets of marginal dependencies. Often, marginal dependencies between observed variables are the result of complex causal connections involving observed and latent variables. This paper shows that, in such cases, the latent causal structure in a model often entails properties which can be tested against empirical evidence, and thus used to discriminate between equivalent alternative models of an empirical phenomenon under study.

4.
Assessing the time-varying sensitivity of environmental models has become a common approach both to understand the value of different data periods for estimating specific parameters and to support a diagnostic analysis of the model structure itself (i.e., whether dominant processes emerge in the model at the right times and over the appropriate time periods). These results are not straightforward to visualize, though, because the window size over which the time-varying sensitivity is best integrated generally differs between parameters. In this short communication we present a new approach to visualizing such time-varying sensitivity across time scales of integration. As a case study, we estimate first-order sensitivity indices with the FAST (Fourier Amplitude Sensitivity Test) method for a typical conceptual rainfall–runoff model. The resulting plots can guide data selection for model calibration, support diagnostic model evaluation and help define the timing and length of spot gauging campaigns in places where long-term calibration data are not yet available.
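
One plausible reading of such a visualization is a heatmap of the sensitivity index integrated over every window size at once. The sketch below assumes a synthetic series of daily first-order indices for a single parameter (in practice these would come from FAST applied over moving windows of a rainfall–runoff model) and plots time against integration scale with matplotlib.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder: one year of daily first-order sensitivity indices for one parameter.
rng = np.random.default_rng(1)
t = np.arange(365)
s_daily = np.clip(0.5 + 0.3 * np.sin(2 * np.pi * t / 365)
                  + 0.1 * rng.standard_normal(365), 0, 1)

windows = np.arange(1, 91)                    # integration scales: 1..90 days
grid = np.full((windows.size, t.size), np.nan)
for r, w in enumerate(windows):
    kernel = np.ones(w) / w                   # moving average over a w-day window
    grid[r, w - 1:] = np.convolve(s_daily, kernel, mode="valid")

plt.pcolormesh(t, windows, grid, shading="auto")
plt.xlabel("time [days]")
plt.ylabel("integration window [days]")
plt.colorbar(label="first-order sensitivity index")
plt.title("Sensitivity across time scales of integration")
plt.show()
```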

5.
Recently, a nonparametric marginal structural model (NPMSM) approach to causal inference has been proposed [Neugebauer, R., van der Laan, M., 2006. Nonparametric causal effects based on marginal structural models. J. Statist. Plann. Inference, in press, http://www.sciencedirect.com/science/journal/03783758] as an appealing practical alternative to the original parametric MSM (PMSM) approach introduced by Robins [Robins, J., 1998a. Marginal structural models. In: 1997 Proceedings of the American Statistical Association, American Statistical Association, Alexandria, VA, pp. 1-10]. The new MSM-based causal inference methodology generalizes the concept of causal effects: the proposed nonparametric causal effects are interpreted as summary measures of the causal effects defined with PMSMs. In addition, causal inference with NPMSMs does not rely on the assumed correct specification of a parametric MSM; instead, it defines causal effects based on a user-specified working causal model which may be deliberately misspecified. The NPMSM approach was developed for studies with point treatment data or with longitudinal data where the outcome is not time-dependent (typically collected at the end of data collection). In this paper, we generalize this approach to longitudinal studies where the outcome is time-dependent, i.e. collected throughout the span of the study, and address the estimation inconsistency which could easily arise from a hasty generalization of the algorithm for maximum likelihood estimation. More generally, we provide an overview of the multiple causal effect representations that have been developed based on MSMs in longitudinal studies.

6.
Bayesian analysis of empirical software engineering cost models
Many parametric software estimation models have evolved in the last two decades (L.H. Putnam and W. Myers, 1992; C. Jones, 1997; R.M. Park et al., 1992). Almost all of these parametric models have been empirically calibrated to actual data from completed software projects, most commonly with the classical multiple regression approach. As discussed in the paper, multiple regression imposes several assumptions that are frequently violated by software engineering datasets. The paper illustrates the problems faced by the multiple regression approach during the calibration of one of the popular software engineering cost models, COCOMO II. It describes the use of a pragmatic 10 percent weighted-average approach that was used for the first publicly available calibrated version (S. Chulani et al., 1998). It then shows how a more sophisticated Bayesian approach can alleviate some of the problems faced by multiple regression, compares and contrasts the two empirical approaches, and concludes that the Bayesian approach is better and more robust than multiple regression.
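
To make the contrast concrete, here is a small sketch of the conjugate Bayesian update for linear-regression coefficients, which blends an expert prior with the least-squares evidence in precision-weighted fashion. The effort data, prior values and noise level below are hypothetical placeholders, not the paper's COCOMO II calibration.

```python
import numpy as np

def bayes_linear_regression(X, y, prior_mean, prior_cov, noise_var):
    """Conjugate Bayesian update: the posterior mean is a precision-weighted
    blend of the expert prior and the classical least-squares evidence."""
    prior_prec = np.linalg.inv(prior_cov)
    post_prec = prior_prec + X.T @ X / noise_var
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (prior_prec @ prior_mean + X.T @ y / noise_var)
    return post_mean, post_cov

# Hypothetical effort data: log-effort regressed on log-size plus an intercept.
rng = np.random.default_rng(2)
log_size = rng.uniform(1, 5, 40)
X = np.column_stack([np.ones(40), log_size])
y = 1.0 + 1.1 * log_size + 0.3 * rng.standard_normal(40)

# Expert prior: size exponent near 1.0, held with moderate confidence.
mean, cov = bayes_linear_regression(X, y,
                                    prior_mean=np.array([1.0, 1.0]),
                                    prior_cov=np.diag([1.0, 0.05]),
                                    noise_var=0.09)
print(mean)   # posterior coefficients, pulled between prior and data
```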

7.
The problem of identifying dependencies between time series of equity returns is analyzed. Marginal distribution functions are assumed to be known, and a bivariate chi-square test of fit is applied in a fully parametric copula approach. Several marginal models and families of copulas are fitted to Spanish stock market data and compared. The results show the difficulty of fitting the bivariate distribution of raw returns, and highlight the effect of GARCH filtering on the selection of the best-fitting copula.
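
A minimal sketch of the copula side of this pipeline, assuming the GARCH filtering has already produced standardized residuals: transform each margin to pseudo-observations by ranks, then fit one candidate family (here a Gaussian copula) on the normal scores. The simulated "returns" and the 0.6 dependence level are assumptions.

```python
import numpy as np
from scipy import stats

def gaussian_copula_corr(u, v):
    """Fit a Gaussian copula to pseudo-observations by mapping them through
    the standard-normal quantile function and taking the sample correlation."""
    z = stats.norm.ppf(np.column_stack([u, v]))
    return np.corrcoef(z.T)[0, 1]

# Hypothetical pair of filtered return series (in the paper's setup, these
# would be GARCH-standardized residuals of two Spanish equities).
rng = np.random.default_rng(3)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1500)
r1, r2 = z[:, 0], np.sinh(z[:, 1])      # give one margin heavier tails

# Rank-based (empirical-CDF) pseudo-observations make the fit margin-free.
u = stats.rankdata(r1) / (len(r1) + 1)
v = stats.rankdata(r2) / (len(r2) + 1)
print(gaussian_copula_corr(u, v))       # ~0.6 despite different margins
```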

8.
This paper investigates the commonly overlooked “sensitivity” of sensitivity analysis (SA) to what we refer to as the parameter “perturbation scale”, which can be defined as a prescribed size of the sensitivity-related neighbourhood around any point in the parameter space (analogous to the step size Δx for numerical estimation of derivatives). We argue that a perturbation scale is inherent to any SA approach, local or global, and explain how derivative-based SA approaches (e.g., the method of Morris) focus on small-scale perturbations, while variance-based approaches (e.g., the method of Sobol) focus on large-scale perturbations. We employ a novel variogram-based approach, called Variogram Analysis of Response Surfaces (VARS), which bridges derivative- and variance-based approaches. Our analyses with different real-world environmental models demonstrate significant implications of subjectivity in the choice of perturbation scale and the need for strategies to address them. We further show how VARS can uniquely characterize the perturbation-scale dependency and generate sensitivity measures that encompass all sensitivity-related information across the full spectrum of perturbation scales.
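
The variogram ingredient that VARS builds on can be sketched directly: for a perturbation scale h, estimate gamma(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2] by sampling point pairs. Small h behaves like a derivative-based (Morris-style) measure and large h like a variance-based one. The toy response surface below is an assumption, and the full VARS framework (e.g., integrated variogram indices) is not reproduced.

```python
import numpy as np

def directional_variogram(f, dim, axis, scales, n_base=2000, seed=4):
    """gamma_axis(h) = 0.5 * E[(f(x + h*e_axis) - f(x))^2] on [0, 1]^dim.
    Small h probes local (derivative-like) sensitivity; large h probes the
    global (variance-like) end of the perturbation-scale spectrum."""
    rng = np.random.default_rng(seed)
    gammas = []
    for h in scales:
        x = rng.uniform(0, 1 - h, size=(n_base, dim))  # keep pairs inside the cube
        x2 = x.copy()
        x2[:, axis] += h
        gammas.append(0.5 * np.mean((f(x2) - f(x)) ** 2))
    return np.array(gammas)

f = lambda x: np.sin(6 * x[:, 0]) + x[:, 1] ** 2       # toy response surface
scales = [0.01, 0.1, 0.3]
for h, g in zip(scales, directional_variogram(f, 2, axis=0, scales=scales)):
    print(f"h = {h:4.2f}   gamma_1(h) = {g:.4f}")
```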

9.
A fuzzy regression model is developed to construct the relationship between the response and explanatory variables in fuzzy environments. To enhance explanatory power and account for the uncertainty of the formulated model and parameters, a new operator, called the fuzzy product core (FPC), is proposed for the formulation process, establishing fuzzy regression models with fuzzy parameters from fuzzy observations that include fuzzy response and explanatory variables. In addition, the sign of the parameters can be determined during model building. Compared to existing approaches, the proposed approach reduces the amount of unnecessary or unimportant information arising from fuzzy observations and determines the sign of the parameters in the models to increase model performance. This addresses a weakness of related approaches, in which the fuzzy parameters must be predetermined in the formulation process. The proposed approach outperforms existing models in terms of distance, mean similarity, and credibility measures, even when crisp explanatory variables are used.

10.
While the relation between code coverage measures and fault detection is actively studied, only a few studies have investigated the correlation between measures of coverage and of reliability. In this work, we introduce a novel approach to measuring code coverage, called operational coverage, that takes into account how much the program's entities are exercised, so as to reflect the usage profile in the coverage measure. Operational coverage is proposed as (i) an adequacy criterion, i.e., to assess the thoroughness of a black-box test suite derived from the operational profile, and (ii) a selection criterion, i.e., to select test cases for operational profile-based testing. Our empirical evaluation showed that operational coverage is better correlated than traditional coverage with the probability that the next test case derived according to the user's profile will not fail. This result suggests that our approach could provide a good stopping rule for operational profile-based testing. With respect to test case selection, our investigations revealed that operational coverage outperformed the traditional measure in terms of test suite size and fault detection capability on average.
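
One plausible minimal formalization of operational coverage, assuming each program entity carries a usage probability from the operational profile: covered entities contribute their weight rather than one uniform vote. The profile and test-suite hits below are hypothetical.

```python
def operational_coverage(covered, profile_weights):
    """Coverage where each entity counts proportionally to how much it is
    exercised under the operational profile (weights sum to 1), instead of
    the traditional one-entity-one-vote rule."""
    return sum(w for entity, w in profile_weights.items() if entity in covered)

# Hypothetical branch profile: 'parse' dominates field usage.
weights = {"parse": 0.70, "validate": 0.25, "report_error": 0.05}
suite_hits = {"parse", "report_error"}

print(operational_coverage(suite_hits, weights))   # 0.75
print(len(suite_hits) / len(weights))              # traditional coverage: 0.67
```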

11.
A relatively simple class of parametric dynamic models for simulating the growth of uneven-aged forest stands, expressed in terms of diameter classes, is presented and discussed. The problem of validating the models against the available growth data is formulated as a non-linear multipoint boundary value problem and solved via a heuristic iterative procedure which takes into account the parameter sensitivity of the models. Simulation results are shown.
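
A standard representative of this model class is an Usher-type diameter-class projection, sketched below with placeholder growth, survival and recruitment rates; the paper's heuristic boundary-value identification of the parameters is not reproduced.

```python
import numpy as np

# Minimal Usher-type projection for an uneven-aged stand: n[k] holds the
# number of stems in diameter class k; each step a fraction "up" grows into
# the next class, a fraction "stay" remains, the rest dies, and recruitment
# enters the smallest class. All rates are illustrative placeholders.
stay = np.array([0.70, 0.75, 0.80, 0.85])
up = np.array([0.20, 0.15, 0.10, 0.0])
recruitment = 25.0

G = np.diag(stay) + np.diag(up[:-1], k=-1)   # class-transition matrix

n = np.array([120.0, 80.0, 40.0, 15.0])      # initial stems per class
for step in range(10):                       # ten growth periods
    n = G @ n
    n[0] += recruitment
print(np.round(n, 1))
```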

12.
While parametric copulas often lack expressive capacity to capture the complex dependencies that are usually found in empirical data, non-parametric copulas can have poor generalization performance because of overfitting. A semiparametric copula method based on the family of bivariate Archimedean copulas is introduced as an intermediate approach that aims to provide both accurate and robust fits. The Archimedean copula is expressed in terms of a latent function that can be readily represented using a basis of natural cubic splines. The model parameters are determined by maximizing the sum of the log-likelihood and a term that penalizes non-smooth solutions. The performance of the semiparametric estimator is analyzed in experiments with simulated and real-world data, and compared to other methods for copula estimation: three parametric copula models, two semiparametric estimators of Archimedean copulas previously introduced in the literature, two flexible copula methods based on Gaussian kernels and mixtures of Gaussians, and finally, standard parametric Archimedean copulas. The good overall performance of the proposed semiparametric Archimedean approach confirms the capacity of this method to capture complex dependencies in the data while avoiding overfitting.
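
The Archimedean building block used here can be shown compactly: a bivariate Archimedean copula is C(u, v) = phi^{-1}(phi(u) + phi(v)) for a generator phi. The sketch below uses the fixed parametric Clayton generator purely to illustrate the mechanism; the paper instead represents the latent function with penalized natural cubic splines.

```python
import numpy as np

theta = 2.0                                    # Clayton dependence parameter

phi = lambda t: (t ** -theta - 1.0) / theta    # Archimedean generator
phi_inv = lambda s: (1.0 + theta * s) ** (-1.0 / theta)

def clayton_cdf(u, v):
    """C(u, v) = phi^{-1}(phi(u) + phi(v)), the defining Archimedean form."""
    return phi_inv(phi(u) + phi(v))

print(clayton_cdf(0.3, 0.7))
# Sanity check on copula boundary behaviour: C(u, 1) = u.
print(np.allclose(clayton_cdf(0.42, 1.0), 0.42))
```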

13.
Time series representation and similarity based on local autopatterns
Time series data mining has received growing interest along with the increase in temporal data sets from domains such as medicine, finance, and multimedia. Representations are important to reduce dimensionality and generate useful similarity measures. High-level representations such as Fourier transforms, wavelets and piecewise polynomial models have been considered previously. Recently, autoregressive kernels were introduced to reflect the similarity of time series. We introduce a novel approach to modeling the dependency structure in time series that generalizes the concept of autoregression to local autopatterns. Our approach generates a pattern-based representation along with a similarity measure called learned pattern similarity (LPS). A tree-based ensemble-learning strategy that is fast and insensitive to parameter settings is the basis of the approach; a robust similarity measure based on the learned patterns is then presented. This unsupervised approach to representing and measuring the similarity between time series applies to a wide range of data mining tasks (e.g., clustering, anomaly detection, classification). Furthermore, the embedded learning of the representation avoids the pre-defined features and extraction step common in feature-based approaches, and the method generalizes in a straightforward manner to multivariate time series. The effectiveness of LPS is evaluated on time series classification problems from various domains, comparing LPS to eleven well-known similarity measures. Our experimental results show that LPS provides fast and competitive results on benchmark datasets from several domains. Furthermore, LPS provides a research direction and a template approach that breaks from linear dependency models to potentially foster other promising nonlinear approaches.
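
A much-simplified, single-tree sketch of the LPS idea (the paper uses randomized tree ensembles and a different matching rule): learn local autopatterns by predicting a window's last value from its preceding lags, then represent a series by how often each learned leaf pattern fires. Window length, tree depth and the toy series are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def lag_matrix(series, w=8):
    """All length-w windows: first w-1 values are predictors, last is target."""
    idx = np.arange(len(series) - w + 1)[:, None] + np.arange(w)
    seg = series[idx]
    return seg[:, :-1], seg[:, -1]

rng = np.random.default_rng(5)
train = np.sin(np.linspace(0, 20, 300)) + 0.05 * rng.standard_normal(300)

X, y = lag_matrix(train)
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

def leaf_histogram(series, tree, w=8):
    """Bag-of-leaves representation: how often each local pattern fires."""
    X, _ = lag_matrix(series, w)
    leaves = tree.apply(X)
    return np.bincount(leaves, minlength=tree.tree_.node_count) / len(leaves)

ref = leaf_histogram(train, tree)
a = leaf_histogram(np.sin(np.linspace(0, 20, 300)), tree)   # similar series
b = leaf_histogram(rng.standard_normal(300), tree)          # dissimilar series
print(np.linalg.norm(a - ref))   # small distance: similar leaf usage
print(np.linalg.norm(b - ref))   # larger distance: different leaf usage
```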

14.
This paper presents a new model for estimating optical flow based on the motion of planar regions plus local deformations. The approach exploits brightness information to organize and constrain the interpretation of the motion, using segmented regions of piecewise smooth brightness to hypothesize planar regions in the scene. Parametric flow models are estimated in these regions in a two-step process which first computes a coarse fit and then estimates the appropriate parametrization of the motion of the region. The initial fit is refined using a generalization of the standard area-based regression approaches. Since the assumption of planarity is likely to be violated, we allow local deformations from the planar assumption in the same spirit as physically-based approaches which model shape using coarse parametric models plus local deformations. This parametric-plus-deformation model exploits the strong constraints of parametric approaches while retaining the adaptive nature of regularization approaches. Experimental results on a variety of images show that the model produces accurate flow estimates and illustrate the benefit of incorporating brightness segmentation boundaries.
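
The parametric core of the approach can be sketched as a least-squares affine motion fit from the brightness-constancy constraint over one region. The synthetic image gradients below are assumptions, and the paper's two-step estimation and local deformation term are omitted.

```python
import numpy as np

def affine_flow(I_x, I_y, I_t, xs, ys):
    """Least-squares affine motion for one region from brightness constancy
    I_x*u + I_y*v + I_t = 0, with u = a0 + a1*x + a2*y and v = a3 + a4*x + a5*y."""
    A = np.column_stack([I_x, I_x * xs, I_x * ys, I_y, I_y * xs, I_y * ys])
    params, *_ = np.linalg.lstsq(A, -I_t, rcond=None)
    return params

# Synthetic region translating by (1, 0.5) pixels per frame on a smooth pattern.
ys, xs = np.mgrid[0:32, 0:32].astype(float)
I_x = 0.2 * np.cos(0.2 * xs)                  # analytic spatial gradients
I_y = -0.15 * np.sin(0.15 * ys)
I_t = -(I_x * 1.0 + I_y * 0.5)                # exact constraint for flow (1, 0.5)

p = affine_flow(I_x.ravel(), I_y.ravel(), I_t.ravel(), xs.ravel(), ys.ravel())
print(np.round(p, 3))                         # approx [1, 0, 0, 0.5, 0, 0]
```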

15.
In real engineering settings, the observations of process variables are usually imprecise, uncertain, or both. In such cases, general process modeling approaches cannot be implemented. In this paper, we investigate parametric and nonparametric evidential regression of imprecise and uncertain data, represented as belief functions on interval-valued variables. The parametric evidential regression includes both multiple linear and nonlinear evidential regression models; the nonlinear model is derived by introducing a kernel function into the multiple linear evidential regression model. The parametric evidential regression models are identified using the evidential EM algorithm, an evidential extension of the EM algorithm. In nonparametric evidential regression, the prediction for a given input vector is computed using a nonparametric, instance-based approach: the training samples in the neighborhood of the given input vector provide pieces of evidence reflecting the values taken by such an input vector, and these pieces of evidence are combined to form the prediction. Experiments with unreliable sensors are designed to validate the performance of the proposed parametric and nonparametric evidential regression models, and comparative studies yield some interesting results.

16.
Temporal dependency is a very important cue for modeling human actions. However, approaches using latent topic models, e.g., probabilistic latent semantic analysis (pLSA), employ the bag-of-words assumption, so word dependencies are usually ignored. In this work, we propose a new approach, structural pLSA (SpLSA), to explicitly model word order by introducing latent variables. More specifically, we develop an action categorization approach that learns action representations as distributions over latent topics in an unsupervised way, where each action frame is characterized by a codebook representation of local shape context. The effectiveness of this approach is evaluated using both the WEIZMANN dataset and the MIT dataset. Results show that the proposed approach outperforms standard pLSA. Additionally, our approach compares favorably with six existing models (GMM, logistic regression, HMM, SVM, CRF, and HCRF) given the same feature representation: it achieves higher categorization accuracy than the first five models and is comparable to the state-of-the-art hidden conditional random field (HCRF) based model using the same feature set.

17.
Low gain feedback, a parameterized family of stabilizing state feedback gains whose magnitudes approach zero as the parameter decreases to zero, has found several applications in constrained control systems, robust control and nonlinear control. In the continuous-time setting, there are currently three ways of constructing low gain feedback laws: the eigenstructure assignment approach, the parametric ARE based approach and the parametric Lyapunov equation based approach. The eigenstructure assignment approach leads to feedback gains explicitly parameterized in the low gain parameter. The parametric ARE based approach produces a Lyapunov function along with the feedback gain, but requires the solution of an ARE for each value of the parameter. The parametric Lyapunov equation based approach combines the advantages of the first two, yielding both explicitly parameterized feedback gains and a Lyapunov function. The first two approaches have been extended to the discrete-time setting; this paper develops the parametric Lyapunov equation based approach to low gain feedback design for discrete-time systems.
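
A hedged illustration of the low gain idea in discrete time, using the eigenstructure-assignment flavor rather than the paper's parametric Lyapunov equation construction: for a double integrator (open-loop poles on the unit circle), place the closed-loop poles just inside the unit circle and watch the gain magnitude vanish with the parameter.

```python
import numpy as np
from scipy.signal import place_poles

# Discrete-time double integrator: open-loop poles at z = 1 on the unit circle.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])

# Low gain family: closed-loop poles at 1-eps and 1-2*eps; as eps -> 0 the
# poles approach the open-loop ones and the gain magnitude shrinks to zero.
for eps in (0.2, 0.05, 0.01):
    K = place_poles(A, B, [1.0 - eps, 1.0 - 2.0 * eps]).gain_matrix
    print(f"eps = {eps:5.2f}   ||K|| = {np.linalg.norm(K):.5f}")
```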

18.
Clustered failure time data often arise in biomedical studies, and a marginal regression modeling approach is often preferred to avoid assumptions on the dependence structure within clusters. A novel estimating equation approach is proposed, based on a semiparametric marginal proportional hazards model, to take the correlation within clusters into account. Unlike the traditional marginal method for clustered failure time data, our method explicitly models the correlation structure within clusters through a pre-specified working correlation matrix. The estimates from the proposed method are proved to be consistent and asymptotically normal. Simulation studies show that the proposed method is more efficient than existing marginal methods. Finally, the model and the proposed method are applied to a kidney infection study.

19.
This work illustrates a simulation approach for optimizing the parametric design and performance of a 2-DOF R–R planar manipulator. Using dynamic and kinematic models of the manipulator, different performance measures are obtained for different combinations of parameters, with noise incorporated to imitate the manipulator's real-time behaviour. A novel approach is proposed to model these otherwise difficult-to-model noise effects. The data generated during simulation for the various parameter combinations are used to analyze the statistical significance of the kinematic and dynamic parameters for manipulator performance using the ANOVA technique. The parameter combinations that give optimum performance measures at different points in the workspace are compared and reported.
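
A minimal sketch of the noise-injected simulation loop, with assumed link lengths, joint angles and noise level: forward kinematics of the 2-DOF R–R arm plus Gaussian joint noise yields an RMS positioning error per parameter combination, the kind of response table an ANOVA would then analyze.

```python
import numpy as np

rng = np.random.default_rng(6)

def tip(q1, q2, l1, l2):
    """Forward kinematics of a planar 2-DOF R-R arm."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=-1)

def positioning_error(l1, l2, q1=0.6, q2=0.9, sigma=0.01, n=5000):
    """RMS tip error under Gaussian joint-angle noise, imitating the
    noise-injected simulation runs for one parameter combination."""
    nominal = tip(q1, q2, l1, l2)
    noisy = tip(q1 + sigma * rng.standard_normal(n),
                q2 + sigma * rng.standard_normal(n), l1, l2)
    return np.sqrt(np.mean(np.sum((noisy - nominal) ** 2, axis=1)))

# Factorial sweep over link lengths (placeholder levels for an ANOVA table).
for l1 in (0.3, 0.5):
    for l2 in (0.3, 0.5):
        print(f"l1={l1}, l2={l2}: RMS error = {positioning_error(l1, l2):.5f}")
```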

20.
In survival analysis applications, the failure rate function may frequently present a unimodal shape, in which case the log-normal or log-logistic distributions are typically used. In this paper we are concerned only with parametric forms, so a location-scale regression model based on the Burr XII distribution is proposed for modeling data with a unimodal failure rate function, as an alternative to the log-logistic regression model. Assuming censored data, we consider a classical analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed to compare the performance of the log-logistic and log-Burr XII regression models. In addition, we use sensitivity analysis to detect influential or outlying observations, and residual analysis to check the model assumptions. Finally, we analyze a real data set under log-Burr XII regression models.
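
To illustrate the parametric core, here is a sketch of censored maximum-likelihood estimation for a baseline Burr XII distribution with scipy (events contribute the density, censored observations the survival function). The data are simulated, and the paper's location-scale regression structure, Bayesian analysis and jackknife estimator are not reproduced.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)

# Synthetic right-censored survival data from a Burr XII distribution.
t_true = stats.burr12.rvs(c=2.0, d=1.5, scale=3.0, size=400, random_state=rng)
censor = rng.uniform(0, 8, size=400)
time = np.minimum(t_true, censor)
event = (t_true <= censor).astype(float)

def neg_loglik(theta):
    """Censored Burr XII likelihood: density for events, survival for censored."""
    c, d, scale = np.exp(theta)            # log-parameters keep them positive
    ll = event * stats.burr12.logpdf(time, c, d, scale=scale) \
       + (1 - event) * stats.burr12.logsf(time, c, d, scale=scale)
    return -ll.sum()

res = optimize.minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
print(np.exp(res.x))                       # estimates of (c, d, scale)
```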
