Similar Articles
1.
To test for functional dependence of regression parameters, a new factor-based bootstrap approach is introduced that is robust under various forms of heteroskedastic error terms. When the functional coefficient is modeled parametrically, the bootstrap approximation of an F-statistic is shown to hold asymptotically. In simulation studies with both parametric and nonparametric functional coefficients, factor-based bootstrap inference outperforms the wild bootstrap and the pairs bootstrap in terms of rejection frequencies under the null hypothesis. Applying the functional coefficient model to a cross-sectional regression of investment on savings, the saving retention coefficient is found to depend on third variables such as the population growth rate and the openness ratio.
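As a concrete point of comparison, the sketch below implements the wild-bootstrap calibration of an F-statistic for a simulated functional-coefficient regression; the proposed factor-based resampling scheme itself is not reproduced here, and the model, data and helper names are illustrative assumptions.

```python
# Sketch: wild-bootstrap F-test of a constant coefficient against a coefficient
# that depends on a third variable z.  Illustrative baseline only; not the
# paper's factor-based bootstrap.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
z = rng.uniform(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)                     # true coefficient is constant

def f_stat(y, x, z):
    """F-statistic for H0: constant coefficient vs. beta(z) = b0 + b1 * z."""
    X0 = np.column_stack([np.ones_like(x), x])             # restricted model
    X1 = np.column_stack([np.ones_like(x), x, x * z])      # functional-coefficient model
    rss0 = np.sum((y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]) ** 2)
    rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
    q = X1.shape[1] - X0.shape[1]
    return ((rss0 - rss1) / q) / (rss1 / (len(y) - X1.shape[1]))

F_obs = f_stat(y, x, z)

# Wild bootstrap under the null: refit the restricted model and perturb its
# residuals with Rademacher weights, which is robust to heteroskedasticity.
X0 = np.column_stack([np.ones_like(x), x])
beta0 = np.linalg.lstsq(X0, y, rcond=None)[0]
resid = y - X0 @ beta0
B = 999
F_boot = np.empty(B)
for b in range(B):
    y_star = X0 @ beta0 + resid * rng.choice([-1.0, 1.0], size=n)
    F_boot[b] = f_stat(y_star, x, z)
p_value = np.mean(F_boot >= F_obs)
print(f"F = {F_obs:.2f}, wild-bootstrap p-value = {p_value:.3f}")
```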

2.
A possible approach to testing for conditional symmetry in time series regression models is discussed. To that end, the Bai and Ng test is utilized. The performance of some popular (unconditional) symmetry tests when applied to regression residuals is also examined. The tests considered include the coefficient of skewness, a joint test of the third and fifth moments, the Runs test, the Wilcoxon signed-rank test and the Triples test. An easy-to-implement symmetric bootstrap procedure is proposed to calculate critical values for these tests, and its consistency is shown. A simple Monte Carlo experiment is conducted to explore the finite-sample properties of all the tests.
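A minimal sketch of the symmetric (random sign-flip) bootstrap applied to the coefficient-of-skewness test on regression residuals, in the spirit of the procedure described; the regression model and error distribution below are illustrative assumptions.

```python
# Sketch: skewness-coefficient symmetry test on regression residuals with
# critical values from a symmetric (random sign-flip) bootstrap.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.standard_t(df=6, size=n)        # symmetric errors under H0

X = np.column_stack([np.ones_like(x), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

def skew_stat(e):
    e = e - e.mean()
    return np.sqrt(len(e)) * np.mean(e ** 3) / np.mean(e ** 2) ** 1.5

T_obs = skew_stat(resid)

# Symmetric bootstrap: multiplying residuals by random +/-1 signs enforces the
# null of symmetry while preserving the residual magnitudes.
B = 999
T_boot = np.array([skew_stat(resid * rng.choice([-1.0, 1.0], size=n)) for _ in range(B)])
p_value = np.mean(np.abs(T_boot) >= np.abs(T_obs))
print(f"skewness statistic = {T_obs:.2f}, bootstrap p-value = {p_value:.3f}")
```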

3.
In this paper we investigate bootstrap techniques for estimating the fractional differencing parameter d in ARFIMA models. The novelty is the focus on the local bootstrap of the periodogram. The approach is applied to three semiparametric estimators of d known from the literature, all based on the periodogram. By means of an extensive set of simulation experiments, the bias and mean squared error are quantified for each estimator, and the efficacy of the local bootstrap is demonstrated in terms of low bias, short confidence intervals, and low CPU times. Finally, a real data set is analyzed to demonstrate that the methodology can be quite effective in solving real problems.
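For orientation, the sketch below implements one periodogram-based semiparametric estimator of d (the GPH log-periodogram regression) together with a heavily simplified local resampling of periodogram ordinates; the paper's actual local bootstrap and bandwidth choices may differ, and the simulated series is an illustrative stand-in.

```python
# Sketch: GPH log-periodogram estimate of the memory parameter d, plus a
# simplified "local bootstrap" that redraws each ordinate from a small
# frequency neighbourhood to build a confidence interval.
import numpy as np

def periodogram(x):
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, n // 2 + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:n // 2 + 1]) ** 2 / (2 * np.pi * n)
    return freqs, I

def gph_estimate(freqs, I, m):
    """Regress log I(lambda_j) on log(4 sin^2(lambda_j / 2)), j = 1..m."""
    lam, y = freqs[:m], np.log(I[:m])
    x = np.log(4 * np.sin(lam / 2) ** 2)
    slope = np.polyfit(x, y, 1)[0]
    return -slope                                  # d enters with coefficient -d

rng = np.random.default_rng(2)
x = rng.normal(size=1024)                          # placeholder series (true d = 0)
freqs, I = periodogram(x)
m = int(len(x) ** 0.5)                             # common n^0.5 bandwidth choice
d_hat = gph_estimate(freqs, I, m)

h, B = 3, 499
d_boot = []
for _ in range(B):
    idx = np.clip(np.arange(m) + rng.integers(-h, h + 1, size=m), 0, len(I) - 1)
    d_boot.append(gph_estimate(freqs, I[idx], m))  # locally resampled ordinates
ci = np.percentile(d_boot, [2.5, 97.5])
print(f"d_hat = {d_hat:.3f}, 95% local-bootstrap CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```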

4.
It is known that the least-squares (LS) class of algorithms produces unbiased estimates provided certain assumptions are met. There are many practical problems, however, where the required assumptions are violated. Typical examples include non-linear dynamical system identification problems, where the input and output observations are affected by measurement uncertainty and possibly correlated noise. This results in biased LS estimates, and the identified model exhibits poor generalisation properties. Model estimation for this type of errors-in-variables problem is investigated in this study, and a new identification scheme based on a bootstrap algorithm is proposed to improve the model estimates for non-linear dynamical system identification.
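A small simulation, under assumed values, that illustrates the bias the abstract refers to: ordinary least squares attenuates the coefficient when the regressor is observed with noise. The bootstrap-based correction proposed in the study is not reproduced here.

```python
# Sketch: attenuation bias of ordinary least squares when the regressor is
# observed with measurement error.  Purely illustrative simulation.
import numpy as np

rng = np.random.default_rng(3)
n, beta = 5000, 2.0
x_true = rng.normal(size=n)
y = beta * x_true + rng.normal(scale=0.5, size=n)
x_obs = x_true + rng.normal(scale=1.0, size=n)     # noisy input measurement

beta_ls = np.sum(x_obs * y) / np.sum(x_obs ** 2)
# Theory: E[beta_ls] ~ beta * var(x_true) / (var(x_true) + var(noise)) = 1.0 here
print(f"true beta = {beta}, LS estimate from noisy input = {beta_ls:.2f}")
```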

5.
A statistical minimax method is proposed for optimizing linear models whose parameters are known only up to membership in uncertainty sets. Statistical methods for constructing these uncertainty sets as confidence regions with a given reliability level are presented. A numerical method for finding a minimax strategy is proposed for arbitrary uncertainty sets satisfying convexity and compactness conditions. A number of examples that admit an analytical solution to the optimization problem are considered, and results of numerical simulation are given.
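A minimal brute-force sketch of the minimax idea: choose the strategy whose worst-case loss over a box-shaped uncertainty set is smallest. The loss function, uncertainty interval and grids are illustrative assumptions, not the article's numerical method.

```python
# Sketch: brute-force minimax strategy for an interval (box) uncertainty set
# on a model parameter theta.
import numpy as np

theta_box = np.linspace(0.8, 1.2, 41)            # uncertainty set for the parameter
decisions = np.linspace(0.0, 2.0, 201)           # candidate strategies u

def loss(u, theta):
    return (theta * u - 1.0) ** 2 + 0.1 * u ** 2  # illustrative convex loss

# Worst-case loss of each decision over the uncertainty set, then minimise it.
worst = np.array([max(loss(u, th) for th in theta_box) for u in decisions])
u_minimax = decisions[np.argmin(worst)]
print(f"minimax decision u* = {u_minimax:.3f}, worst-case loss = {worst.min():.4f}")
```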

6.
An approach to the design of effective computer-based systems is discussed. This approach exploits the user's traditional diagrammatic notations in an effort to achieve usability for experts who are not computer professionals. Notations are formalized as visual languages, thus allowing the design of visual editors, interpreters, and compilers. The users themselves exploit these tools to define a hierarchy of environments by a bootstrapping approach. By navigating within these environments, they can progressively design visual interfaces and computing tools that allow them not only to execute the required computational tasks, but also to gain insight into and control over the computational process and to check the results.

7.
In this note, we outline a simple-to-use yet powerful bootstrap algorithm for handling correlated outcome variables, for either hypothesis testing or confidence intervals, using only the marginal models. The new method can handle combinations of continuous and discrete data and can be used in conjunction with other covariates in a model. The procedure is based upon estimating the family-wise error (FWE) rate and then making a Bonferroni-type correction. A simulation study illustrates the accuracy of the algorithm over a variety of correlation structures.
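One plausible reading of the procedure, sketched below: bootstrap rows of the null-centred outcome matrix to estimate the FWE of the marginal tests, then pick a per-test level that keeps the estimated FWE at the nominal 5%. The data, thresholds and search are illustrative, and the authors' exact algorithm may differ.

```python
# Sketch: estimating the family-wise error rate (FWE) of marginal tests on
# correlated outcomes by bootstrapping rows, then applying a Bonferroni-type
# correction calibrated to the estimated FWE.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, k = 80, 4
cov = 0.6 * np.ones((k, k)) + 0.4 * np.eye(k)          # correlated outcomes
Y = rng.multivariate_normal(np.zeros(k), cov, size=n)

def marginal_pvalues(Y):
    t = Y.mean(axis=0) / (Y.std(axis=0, ddof=1) / np.sqrt(len(Y)))
    return 2 * stats.t.sf(np.abs(t), df=len(Y) - 1)

# Bootstrap the joint null by centring each outcome and resampling rows.
Y0 = Y - Y.mean(axis=0)
B = 2000
boot_p = np.array([marginal_pvalues(Y0[rng.integers(0, n, size=n)]) for _ in range(B)])

def est_fwe(alpha_per):                                 # FWE = P(any marginal rejection)
    return np.mean((boot_p < alpha_per).any(axis=1))

# Bonferroni-type correction: largest per-test level whose estimated FWE <= 5%.
levels = np.linspace(0.05 / k, 0.05, 50)
candidates = [a for a in levels if est_fwe(a) <= 0.05]
alpha_per = max(candidates) if candidates else 0.05 / k
print(f"per-test level = {alpha_per:.4f} (plain Bonferroni would use {0.05 / k:.4f})")
print("observed marginal p-values:", np.round(marginal_pvalues(Y), 3))
```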

8.
The determination of the optimal values for parameters in a continuous dynamic system model is normally a computationally intensive task. Two separate numerical processes are involved: the mechanism for solving the ordinary differential equations that comprise the system model, and the function minimization procedure used to search for the optimal parameter values. Both these processes typically have embedded parameters which control their respective operations. In this paper a general approach is described for adjusting these parameters in a way which allows the two processes to function in a more integrated, and hence more efficient, way in solving the parameter optimization problem. A specific implementation of the approach is described and the results of an extensive set of numerical experiments are given. These results indicate that the approach can provide a significant advantage in reducing the computational effort.
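An illustrative sketch of the general idea of coupling the two numerical processes: an ODE solver nested inside a function minimiser, with the solver tolerance loosened during the early search and tightened near convergence. The model, data and tolerance schedule are assumptions, not the paper's specific implementation.

```python
# Sketch: ODE parameter estimation with a staged solver tolerance -- cheap,
# low-accuracy solves for the initial search, tight tolerance for refinement.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def model(t, y, k):                        # simple first-order decay dy/dt = -k*y
    return -k * y

t_obs = np.linspace(0.0, 5.0, 20)
y_obs = 3.0 * np.exp(-0.7 * t_obs) + np.random.default_rng(5).normal(0, 0.05, 20)

def sse(params, rtol):
    k = params[0]
    sol = solve_ivp(model, (0.0, 5.0), [3.0], t_eval=t_obs, args=(k,), rtol=rtol)
    return np.sum((sol.y[0] - y_obs) ** 2)

rough = minimize(sse, x0=[0.2], args=(1e-3,), method="Nelder-Mead")   # loose solver
fine = minimize(sse, x0=rough.x, args=(1e-8,), method="Nelder-Mead")  # tight solver
print(f"estimated k = {fine.x[0]:.3f} (true 0.7)")
```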

9.
Context: Software quality is a complex concept. Therefore, assessing and predicting it is still challenging in practice as well as in research. Activity-based quality models break down this complex concept into concrete definitions, more precisely facts about the system, process, and environment, as well as their impact on activities performed on and with the system. However, these models lack an operationalisation that would allow them to be used in the assessment and prediction of quality. Bayesian networks have been shown to be a viable means for this task, incorporating variables with uncertainty. Objective: The qualitative knowledge contained in activity-based quality models is an abundant basis for building Bayesian networks for quality assessment. This paper describes a four-step approach for systematically deriving a Bayesian network from an assessment goal and a quality model. Method: The four steps of the approach are explained in detail and with running examples. Furthermore, an initial evaluation is performed on data obtained from NASA projects and an open source system, and the applicability of the approach to this data is analysed. Results: The approach is applicable to the data from the NASA projects and the open source system. However, the predictive results vary depending on the availability and quality of the data, especially the underlying general distributions. Conclusion: The approach is viable in a realistic context but needs further investigation in case studies in order to analyse its predictive validity.
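A toy illustration, with invented probabilities, of what operationalising a quality-model fragment as a Bayesian network looks like: two hypothetical "fact" nodes feed an activity-oriented quality node, and a posterior is computed by brute-force enumeration. The paper's four-step derivation is not reproduced.

```python
# Sketch: a tiny Bayesian network fragment linking two hypothetical facts
# (CodeCloning, TestCoverage) to an activity node (MaintenanceEffortHigh),
# evaluated by enumeration.  All probabilities are invented for illustration.
from itertools import product

p_cloning = {True: 0.3, False: 0.7}
p_coverage = {True: 0.6, False: 0.4}
# P(MaintenanceEffortHigh | cloning, coverage)
p_effort = {(True, True): 0.5, (True, False): 0.8,
            (False, True): 0.2, (False, False): 0.4}

def joint(cl, cov, eff):
    p_e = p_effort[(cl, cov)] if eff else 1 - p_effort[(cl, cov)]
    return p_cloning[cl] * p_coverage[cov] * p_e

# Query: P(CodeCloning = True | MaintenanceEffortHigh = True) by enumeration.
num = sum(joint(True, cov, True) for cov in (True, False))
den = sum(joint(cl, cov, True) for cl, cov in product((True, False), repeat=2))
print(f"P(cloning | high maintenance effort) = {num / den:.3f}")
```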

10.
This article is about testing the equality of several normal means when the variances are unknown and arbitrary, i.e., the setup of one-way ANOVA. Even though several tests are available in the literature, none of them perform well in terms of Type I error probability under various sample size and parameter combinations. In fact, Type I errors can be highly inflated for some of the commonly used tests; a serious issue that appears to have been overlooked. We propose a parametric bootstrap (PB) approach and compare it with three existing location-scale invariant tests: the Welch test, the James test and the generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test is the best among the four tests with respect to Type I error rates. The PB test performs very satisfactorily even for small samples, while the Welch test and the GF test exhibit poor Type I error properties when the sample sizes are small and/or the number of means to be compared is moderate to large. The James test performs better than the Welch test and the GF test. It is also noted that the same tests can be used to test the significance of the random-effect variance component in a one-way random model under unequal error variances. Such models are widely used to analyze data from inter-laboratory studies. The methods are illustrated using some examples.
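A sketch of a parametric bootstrap test in the spirit described: the statistic weights group means by n_i/s_i^2, and its null distribution is simulated from normal sample means and chi-square sample variances. Details may differ from the article's exact PB test, and the data are simulated for illustration.

```python
# Sketch: parametric bootstrap (PB) test of H0: mu_1 = ... = mu_k under unequal,
# unknown variances.
import numpy as np

rng = np.random.default_rng(6)
groups = [rng.normal(0.0, 1.0, 8), rng.normal(0.0, 3.0, 12), rng.normal(0.0, 0.5, 6)]

def pb_statistic(means, variances, sizes):
    w = sizes / variances
    grand = np.sum(w * means) / np.sum(w)
    return np.sum(w * (means - grand) ** 2)

n = np.array([len(g) for g in groups])
xbar = np.array([g.mean() for g in groups])
s2 = np.array([g.var(ddof=1) for g in groups])
T_obs = pb_statistic(xbar, s2, n)

B = 9999
T_null = np.empty(B)
for b in range(B):
    xbar_b = rng.normal(0.0, np.sqrt(s2 / n))          # group means under H0
    s2_b = s2 * rng.chisquare(n - 1) / (n - 1)         # simulated sample variances
    T_null[b] = pb_statistic(xbar_b, s2_b, n)
p_value = np.mean(T_null >= T_obs)
print(f"PB test statistic = {T_obs:.2f}, p-value = {p_value:.3f}")
```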

11.
New sufficient conditions are derived for stability robustness of linear time-invariant state-space systems with constant real parameter uncertainty. These bounds are obtained by applying a guardian map to the uncertain system matrices. Since this approach is only valid for constant real parameter uncertainty, these bounds do not imply quadratic stability, which guarantees robust stability with respect to time-varying uncertainty but is often conservative with respect to constant real parameter uncertainty. Numerical results are given to compare the new bounds with bounds obtained previously by means of Lyapunov methods.
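As a naive numerical companion to this setting (not the guardian-map bound), the sketch below sweeps a constant real parameter q and checks whether A(q) = A0 + q*A1 stays Hurwitz; the matrices are illustrative.

```python
# Sketch: brute-force check of robust stability for A(q) = A0 + q*A1 with a
# constant real uncertain parameter q, by sweeping q and inspecting eigenvalues.
import numpy as np

A0 = np.array([[0.0, 1.0], [-2.0, -1.0]])     # nominal (Hurwitz) system matrix
A1 = np.array([[0.0, 0.0], [1.0, 0.0]])       # direction of parameter uncertainty

q_grid = np.linspace(-3.0, 3.0, 601)
spectral_abscissa = np.array([
    np.max(np.linalg.eigvals(A0 + q * A1).real) for q in q_grid
])
stable = q_grid[spectral_abscissa < 0]
print(f"A(q) Hurwitz for q in approximately [{stable.min():.2f}, {stable.max():.2f}]")
```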

12.
An asynchronous, stochastic-approximation-based (frequentist) approach is proposed for mapping using noisy mobile sensors under two different scenarios: (1) perfectly known sensor locations and (2) uncertain sensor locations. The frequentist methodology has linear complexity in the map components, is immune to the data association problem and is provably consistent. The frequentist methodology, in conjunction with a Bayesian estimator, is applied to the Simultaneous Localization and Mapping (SLAM) problem of robotics. Several large maps are estimated using the hybrid Bayesian/frequentist scheme, and the results show that the technique is robust to the computational and performance issues inherent in purely Bayesian approaches to the problem.
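A minimal sketch of the first scenario (perfectly known sensor locations): a Robbins-Monro style stochastic-approximation update of a single landmark estimate from noisy relative observations. The measurement model and step sizes are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: stochastic-approximation mapping of one landmark from noisy relative
# measurements taken by a sensor with perfectly known positions.
import numpy as np

rng = np.random.default_rng(7)
landmark_true = np.array([4.0, -2.0])
m_hat = np.zeros(2)                                   # initial map estimate

for k in range(1, 501):
    sensor_pos = rng.uniform(-5, 5, size=2)           # known sensor location
    z = (landmark_true - sensor_pos) + rng.normal(0, 0.3, size=2)  # noisy relative obs
    gain = 1.0 / k                                    # decreasing step size
    m_hat += gain * ((sensor_pos + z) - m_hat)        # stochastic-approximation step

print(f"estimated landmark = {np.round(m_hat, 2)}, true = {landmark_true}")
```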

13.
This article treats the problem of vagueness in databases from a general point of view. Several kinds of imprecise attribute values are considered, including the case where such values are fuzzy sets of objects. The possibility of managing uncertain data is also taken into account, and both sources of incomplete information are studied jointly. All these vague elements are represented in a unified manner using a semantic data model. The article shows how this representation is possible and opens the way to implementing this kind of information with a classic object-oriented database system. © 1996 John Wiley & Sons, Inc.
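A hypothetical, minimal data structure illustrating the kind of representation discussed: an imprecise attribute value as a fuzzy set of candidate objects with a possibility-style comparison. It is not the article's semantic data model.

```python
# Sketch: an imprecise attribute value as a fuzzy set of candidate objects
# (object -> membership degree), plus a possibility-style equality measure.
class FuzzyValue:
    def __init__(self, memberships):
        self.mu = dict(memberships)                 # e.g. {"red": 1.0, "orange": 0.6}

    def possibility_equal(self, other):
        """Possibility that the two imprecise values denote the same object."""
        common = self.mu.keys() & other.mu.keys()
        return max((min(self.mu[o], other.mu[o]) for o in common), default=0.0)

shirt_colour = FuzzyValue({"red": 1.0, "orange": 0.6})
reported_colour = FuzzyValue({"orange": 1.0, "yellow": 0.3})
print(shirt_colour.possibility_equal(reported_colour))   # -> 0.6
```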

14.
This paper describes an approach to system modeling based on heuristic mean value analysis. The virtues of the approach are conceptual simplicity and computational efficiency. The approach can be applied to a large variety of systems, and can handle features such as resource constraints, tightly and loosely coupled multiprocessors, distributed processing, and certain types of CPU priorities. Extensive validation results are presented, including truly predictive situations. The paper is intended primarily as a tutorial on the method and its applications, rather than as an exposition of research results.
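For reference, the sketch below implements exact Mean Value Analysis for a closed, single-class, product-form network, the baseline that heuristic MVA approximates and extends to the features listed above; service demands and population are illustrative.

```python
# Sketch: exact Mean Value Analysis (MVA) for a closed, single-class,
# product-form queueing network.
def mva(service_demands, population):
    """service_demands[k]: mean demand at station k (seconds per job)."""
    q = [0.0] * len(service_demands)          # mean queue lengths at population 0
    for n in range(1, population + 1):
        # Residence time at each station: demand inflated by the queue seen on arrival.
        r = [d * (1.0 + q_k) for d, q_k in zip(service_demands, q)]
        throughput = n / sum(r)               # Little's law for the whole network
        q = [throughput * r_k for r_k in r]   # Little's law per station
    return throughput, r, q

X, R, Q = mva([0.05, 0.08, 0.02], population=20)
print(f"throughput = {X:.2f} jobs/s, response time = {sum(R):.3f} s, "
      f"queues = {[round(v, 2) for v in Q]}")
```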

15.
A decision-theoretic approach is proposed for estimating unknown random and nonrandom parameters from a linear measurement model when the a priori statistics are incomplete and only a small number of data points are available. The unknown statistics are partially characterized by considering two regions in the measurement space, namely good and bad data regions, and by constraining the partial probability, the partial covariance, or a combination thereof, of the measurements. The random parameter is assumed to be a Gaussian variable with known mean and known covariance. Under the minimum covariance criterion, the min-max estimator is found to be a soft-limiter or tangent-type nonlinear function, depending on the a priori statistics available. The estimator for the unknown nonrandom parameter is obtained from the root of a function of the residuals, the function being obtained by minimizing the error covariance; the resulting estimator is similar to that for the random parameter case.
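A small sketch of the soft-limiter form of estimator mentioned, applied to a location parameter with occasional "bad data"; the clipping threshold, iteration scheme and data are illustrative assumptions rather than the paper's derivation.

```python
# Sketch: robust location estimation with a soft-limiter (clipped) residual
# nonlinearity, iterated to a fixed point.
import numpy as np

rng = np.random.default_rng(8)
good = rng.normal(5.0, 1.0, 45)                  # "good data" region
bad = rng.normal(5.0, 15.0, 5)                   # occasional bad data
z = np.concatenate([good, bad])

def soft_limiter_estimate(z, clip=2.0, iters=50):
    theta = np.median(z)                         # robust starting point
    for _ in range(iters):
        resid = np.clip(z - theta, -clip, clip)  # soft-limit large residuals
        theta = theta + resid.mean()             # move toward the zero of the clipped sum
    return theta

print(f"sample mean = {z.mean():.2f}, soft-limiter estimate = {soft_limiter_estimate(z):.2f}")
```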

16.
This paper presents a formulation of the facilities block layout problem which explicitly considers uncertainty in material handling costs on a continuous scale, through the expected values and standard deviations of product forecasts. The formulation is solved using a genetic algorithm meta-heuristic with a flexible bay construct of the departments and total facility area. It is shown that, depending on the decision-maker's attitude towards uncertainty, the optimal design can change significantly. Furthermore, designs can be optimized directly for robustness over a range of uncertainty that is pre-specified by the user. This formulation offers a computationally tractable and intuitively appealing alternative to previous stochastic layout formulations that are based on discrete scenario probabilities.
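A toy evaluation of the kind of mean-plus-dispersion objective such a formulation uses: expected material handling cost plus a weight on its standard deviation, for a fixed illustrative layout. The genetic algorithm and flexible-bay encoding are not reproduced.

```python
# Sketch: evaluating a candidate layout under uncertain material flows by
# combining the expected handling cost with its standard deviation.
import numpy as np

centroids = np.array([[5.0, 5.0], [15.0, 5.0], [5.0, 15.0], [15.0, 15.0]])  # 4 departments
flow_mean = np.array([[0, 10, 4, 0], [0, 0, 6, 8], [0, 0, 0, 5], [0, 0, 0, 0]])
flow_std = 0.3 * flow_mean                        # forecast uncertainty per flow

dist = np.abs(centroids[:, None, :] - centroids[None, :, :]).sum(axis=2)  # rectilinear
cost_mean = np.sum(flow_mean * dist)
cost_std = np.sqrt(np.sum((flow_std * dist) ** 2))  # flows assumed independent

k = 1.0                                           # decision-maker's aversion to uncertainty
print(f"robust layout cost = {cost_mean:.0f} + {k} * {cost_std:.0f} "
      f"= {cost_mean + k * cost_std:.0f}")
```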

17.
Uncertainty sampling is an effective method for performing active learning that is computationally efficient compared to other active learning methods such as loss-reduction methods. However, unlike loss-reduction methods, uncertainty sampling cannot minimize total misclassification costs when errors incur different costs. This paper introduces a method for performing cost-sensitive uncertainty sampling that makes use of self-training. We show that, even when misclassification costs are equal, this self-training approach results in faster reduction of loss as a function of the number of points labeled, and in more reliable posterior probability estimates, compared to standard uncertainty sampling. We also show why other, more naive, modifications of uncertainty sampling aimed at minimizing total misclassification costs will not always work well.
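One plausible sketch of the idea: score unlabelled points by the expected cost of the best decision under the model's predicted probabilities, after a self-training pass that temporarily pseudo-labels confident points. The cost matrix, confidence threshold and data are illustrative, not the paper's exact algorithm.

```python
# Sketch: cost-sensitive uncertainty sampling with a self-training step.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)
labeled = list(range(20))                       # small initial labelled pool
unlabeled = list(range(20, 500))
cost = np.array([[0.0, 1.0],                    # cost[true, predicted]
                 [5.0, 0.0]])                   # missing class 1 is 5x worse

def select_query(X, y, labeled, unlabeled):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[unlabeled])
    # Self-training: pseudo-label confident points and refit before scoring.
    confident = [u for u, p in zip(unlabeled, proba.max(axis=1)) if p > 0.95]
    if confident:
        pseudo_y = y.copy()
        pseudo_y[confident] = clf.predict(X[confident])
        clf = LogisticRegression().fit(X[labeled + confident],
                                       pseudo_y[labeled + confident])
        proba = clf.predict_proba(X[unlabeled])
    # Query where even the cost-optimal decision still has high expected cost.
    exp_cost = (proba @ cost).min(axis=1)
    return unlabeled[int(np.argmax(exp_cost))]

print("next point to label:", select_query(X, y, labeled, unlabeled))
```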

18.
Numerous frameworks have been proposed in recent years for deductive databases with uncertainty. On the basis of how uncertainty is associated with the facts and rules in a program, we classify these frameworks into implication-based (IB) and annotation-based (AB) frameworks. We take the IB approach and propose a generic framework, called the parametric framework, as a unifying umbrella for IB frameworks. We develop the declarative, fixpoint, and proof-theoretic semantics of programs in our framework and show their equivalence. Using the framework as a basis, we then study the query optimization problem of containment of conjunctive queries in this framework and establish necessary and sufficient conditions for containment for several classes of parametric conjunctive queries. Our results yield tools for query optimization for large classes of query programs in IB deductive databases with uncertainty.
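A toy fixpoint evaluation of an implication-based program in which rules carry certainties: body atoms combine by min, the rule certainty by multiplication, and alternative derivations by max, which is one common instance of the parametric combinators; the program and certainties are invented for illustration.

```python
# Sketch: naive bottom-up fixpoint evaluation for a small IB rule program with
# certainties.  Rules (with rule certainties):
#   path(X, Y) <- edge(X, Y)              [1.0]
#   path(X, Z) <- edge(X, Y), path(Y, Z)  [0.7]
facts = {("edge", "a", "b"): 0.9, ("edge", "b", "c"): 0.8}

def step(db):
    new = dict(db)
    for (p, x, y), c in list(db.items()):
        if p == "edge":
            new[("path", x, y)] = max(new.get(("path", x, y), 0.0), 1.0 * c)
    for (p1, x, y), c1 in list(db.items()):
        for (p2, y2, z), c2 in list(db.items()):
            if p1 == "edge" and p2 == "path" and y == y2:
                cert = 0.7 * min(c1, c2)            # conjunction: min; propagation: product
                new[("path", x, z)] = max(new.get(("path", x, z), 0.0), cert)
    return new

db = dict(facts)
while True:                                         # iterate to the least fixpoint
    nxt = step(db)
    if nxt == db:
        break
    db = nxt
print({k: round(v, 3) for k, v in db.items() if k[0] == "path"})
```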
