Similar Documents
20 similar documents found (search time: 0 ms)
1.
The construction of bootstrap hypothesis tests can differ from that of bootstrap confidence intervals because of the need to generate the bootstrap distribution of test statistics under a specific null hypothesis. Similarly, bootstrap power calculations rely on resampling being carried out under specific alternatives. We describe and develop null and alternative resampling schemes for common scenarios, constructing bootstrap tests for the correlation coefficient, variance, and regression/ANOVA models. Bootstrap power calculations for these scenarios are described. In some cases, null-resampling bootstrap tests are equivalent to tests based on appropriately constructed bootstrap confidence intervals. In other cases, particularly those for which simple percentile-method bootstrap intervals are in routine use, such as the correlation coefficient, null-resampling tests differ from interval-based tests. We critically assess the performance of bootstrap tests, examining size and power properties of the tests numerically using both real and simulated data. Where they differ from tests based on bootstrap confidence intervals, null-resampling tests have reasonable size properties, outperforming tests based on bootstrapping without regard to the null hypothesis. The bootstrap tests also have reasonable power properties.
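To illustrate the null-resampling idea for the correlation coefficient, the sketch below resamples the two variables independently, so the bootstrap distribution of the sample correlation is generated under H0: rho = 0. This is one common null-resampling scheme, not necessarily the exact construction developed in the paper; the function name and defaults are illustrative.

```python
import numpy as np

def null_bootstrap_corr_test(x, y, n_boot=2000, seed=0):
    """Two-sided bootstrap test of H0: rho = 0 for the Pearson correlation.

    Null resampling: x and y are resampled independently of each other,
    which forces zero correlation in the resampling world.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    r_obs = np.corrcoef(x, y)[0, 1]
    r_null = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)  # drawn independently of yb
        yb = rng.choice(y, size=n, replace=True)
        r_null[b] = np.corrcoef(xb, yb)[0, 1]
    p_value = np.mean(np.abs(r_null) >= abs(r_obs))
    return r_obs, p_value
```

Power under a specific alternative can then be approximated by applying this test to many datasets simulated under that alternative and recording the rejection rate.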

2.
This paper considers the decision-making problem of selecting a strategy from a set of alternatives on the basis of incomplete information (e.g. a finite number of observations). At any time the system can adopt a particular strategy or decide to gather additional information at some cost. Balancing the expected utility of the new information against the cost of acquiring the information is the central problem that the authors address. In the authors' approach, the cost and utility of applying a particular strategy to a given problem are represented as random variables from a parametric distribution. By observing the performance of each strategy on a randomly selected sample of problems, one can use parameter estimation techniques to infer statistical models of performance on the general population of problems. These models can then be used to estimate: (1) the utility and cost of acquiring additional information and (2) the desirability of selecting a particular strategy from a set of choices. Empirical results are presented that demonstrate the effectiveness of the hypothesis evaluation techniques for tuning system parameters in a NASA antenna scheduling application.
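A stylized sketch of this trade-off, assuming (for illustration only) normal models for each strategy's net utility and a one-step "expected value of perfect information" criterion; it does not reproduce the authors' parametric models or the NASA scheduling application.

```python
import numpy as np

def evpi_decision(samples_per_strategy, info_cost, n_draws=10_000, seed=0):
    """Decide between committing to the best-looking strategy and buying more data.

    samples_per_strategy: list of 1-D arrays of observed net utilities, one per
    candidate strategy. Each strategy's mean utility is modelled (crudely) as
    Normal(sample mean, squared standard error).
    """
    rng = np.random.default_rng(seed)
    means = np.array([s.mean() for s in samples_per_strategy])
    ses = np.array([s.std(ddof=1) / np.sqrt(len(s)) for s in samples_per_strategy])

    # Monte Carlo draws from the approximate posteriors of the mean utilities.
    draws = rng.normal(means, ses, size=(n_draws, len(means)))

    # Expected value of perfect information: average gain if the truly best
    # strategy were revealed before committing.
    evpi = draws.max(axis=1).mean() - means.max()

    best = int(means.argmax())
    gather_more = evpi > info_cost
    return best, evpi, gather_more
```

If `gather_more` is True, the expected benefit of further observation exceeds its cost, so the system keeps sampling; otherwise it commits to strategy `best`.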

3.
Power and sample size determination has been a challenging issue for multiple testing procedures, especially stepwise procedures, mainly because (1) there are several power definitions, (2) power calculation usually requires multivariate integration involving order statistics, and (3) expansion of these power expressions in terms of ordinary statistics, instead of order statistics, is generally a difficult task. Traditionally, power and sample size calculations rely on either simulations or some recursive algorithm, neither of which is straightforward or computationally economical. In this paper we develop explicit formulas for the minimal power and r-power of stepwise procedures, as well as the complete power of single-step procedures, for exchangeable and non-exchangeable bivariate and trivariate test statistics. With the explicit power expressions, we are able to calculate the desired power directly, given the sample size and correlation. Numerical examples are presented to illustrate the relationship among power, sample size and correlation.
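The traditional simulation route that the explicit formulas replace can be sketched as follows for two hypotheses: correlated bivariate normal test statistics are generated under a chosen alternative, the Holm step-down procedure is applied, and minimal power (reject at least one false null) and complete power (reject both) are estimated empirically. The effect sizes, correlation and procedure here are illustrative, not those of the paper.

```python
import numpy as np
from scipy.stats import norm

def holm_power_sim(delta=(2.5, 2.5), rho=0.5, alpha=0.05, n_sim=100_000, seed=0):
    """Monte Carlo minimal and complete power of the two-hypothesis Holm procedure.

    delta: means of the two one-sided standard-normal test statistics under
    the alternative; rho: their correlation.
    """
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=delta, cov=cov, size=n_sim)
    p = norm.sf(z)                                   # one-sided p-values, shape (n_sim, 2)

    p_sorted = np.sort(p, axis=1)
    reject_first = p_sorted[:, 0] <= alpha / 2                 # Holm step 1
    reject_both = reject_first & (p_sorted[:, 1] <= alpha)     # Holm step 2

    minimal_power = reject_first.mean()     # at least one false null rejected
    complete_power = reject_both.mean()     # both false nulls rejected
    return minimal_power, complete_power
```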

4.
The statistical properties of software pseudorandom number generators are tested against the uniform distribution on the unit hypercube in dimensions 1 to 15. Several CLHEP generators, the Mersenne Twister generator and the MCNP generator are tested. Parts of the pseudorandom number sequences with poor statistical properties are found, and easily rectified flaws in two CLHEP generators are detected.
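A minimal version of this kind of check, assuming a chi-square goodness-of-fit test on a binned unit hypercube; the actual test battery and binning used in the paper may differ. The dimension and bin count below are illustrative, and since the number of cells grows as bins**dim, higher dimensions need coarser bins or larger samples.

```python
import numpy as np
from scipy.stats import chi2

def hypercube_uniformity_test(points, bins=4):
    """Chi-square test that d-dimensional points are uniform on [0, 1)^d."""
    points = np.asarray(points)
    n, d = points.shape
    # Map each point to a cell index in a bins**d grid.
    cells = np.minimum((points * bins).astype(int), bins - 1)
    flat = np.ravel_multi_index(cells.T, dims=(bins,) * d)
    counts = np.bincount(flat, minlength=bins ** d)
    expected = n / bins ** d
    stat = ((counts - expected) ** 2 / expected).sum()
    p_value = chi2.sf(stat, df=bins ** d - 1)
    return stat, p_value

# Example: test NumPy's default generator in 3 dimensions.
rng = np.random.default_rng(12345)
stat, p = hypercube_uniformity_test(rng.random((100_000, 3)), bins=4)
```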

5.
Conclusion. The intelligent hypothesis testing system SVH has been implemented in an integrated CASE APS environment for software system development by structured-modular composition programming [13]. The use of CASE APS for this purpose has proved highly productive and promising, because this support system ensures software development along the life cycle spiral, i.e., it supports all technological processes from software design to operation and upgrading. These support tools are intended for applied programmers, and are sufficiently simple to learn and use. The CASE APS environment also provides support tools for efficient implementation of knowledge and data bases for the constructed applications, including SVH. Translated from Kibernetika i Sistemnyi Analiz, No. 5, pp. 50–58, September–October, 1997.

6.
Fuzzy sets and fuzzy state modeling require modifications of fundamental principles of statistical estimation and inference. These modifications trade increased computational effort for greater generality of data representation. For example, multivariate discrete response data of high (but finite) dimensionality present the problem of analyzing large numbers of cells with low event counts due to finite sample size. It would be useful to have a model based on an invariant metric to represent such data parsimoniously with a latent “smoothed” or low dimensional parametric structure. Determining the parameterization of such a model is difficult since multivariate normality (i.e., that all significant information is represented in the second order moments matrix), an assumption often used in fitting the most common types of latent variable models, is not appropriate. We present a fuzzy set model to analyze high dimensional categorical data where a metric for grades of membership in fuzzy sets is determined by latent convex sets, within which moments up to order J of a discrete distribution can be represented. The model, based on a fuzzy set parameterization, can be shown, using theorems on convex polytopes [1], to be dependent on only the enclosing linear space of the convex set. It is otherwise measure invariant. We discuss the geometry of the model's parameter space, the relation of the convex structure of model parameters to the dual nature of the case and variable spaces, how that duality relates to describing fuzzy set spaces, and modified principles of estimation.

7.
Based on the formation mechanism of silicon slag and the characteristics of the data, statistical hypothesis testing is used to improve data reliability. This is combined with long-accumulated expert knowledge, represented as production rules, to build a data-classification expert system based on principal component analysis. A neural network is then constructed, tailored to the characteristics of the slag-formation mechanism, to predict accurate silicon slag values. Finally, the system is implemented and simulated on a computer; the simulation results show that the proposed neural network prediction method, with data preprocessing based on hypothesis testing and an expert system, is a feasible data prediction strategy.
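A rough sketch of the pipeline described above: hypothesis-test-based screening of the raw data, principal component analysis, then a neural network predictor. The screening rule, PCA settings and network architecture below are illustrative placeholders, not the expert-system rules or network of the paper, and `X_raw`, `y_raw`, `X_new` are assumed user-supplied.

```python
import numpy as np
from scipy.stats import norm
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def screen_outliers(X, y, alpha=0.01):
    """Drop rows whose features look implausible under a normal model
    (a crude stand-in for the hypothesis-testing preprocessing step)."""
    z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    threshold = norm.ppf(1 - alpha / 2)          # two-sided critical value
    keep = (np.abs(z) <= threshold).all(axis=1)
    return X[keep], y[keep]

# PCA for feature compression followed by a small neural network predictor.
model = make_pipeline(StandardScaler(), PCA(n_components=3),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
# X_raw: process measurements, y_raw: measured slag values (user-supplied).
# X_clean, y_clean = screen_outliers(X_raw, y_raw)
# model.fit(X_clean, y_clean)
# slag_pred = model.predict(X_new)
```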

8.
A computer-aided statistical procedure based on standard measures of central tendency, dispersion and correlation was developed to identify meaningful compensable factors and scales in a point job evaluation plan. The procedure specifically examines the degree of field overlap between factors, their ability to act as reliable wage indicators, and their ability to discriminate among jobs. To facilitate the execution of the statistical procedure, a computer program was written in Visual Basic. The use of the proposed methodology was demonstrated by means of a sample case problem.
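The three diagnostics described (overlap between factors, strength as wage indicators, and ability to discriminate among jobs) can be computed in a few lines; the original tool was written in Visual Basic and applies additional screening rules, so this is only an illustrative reconstruction.

```python
import numpy as np

def factor_diagnostics(ratings, wages):
    """ratings: (n_jobs, n_factors) array of factor scores; wages: (n_jobs,) array.

    Returns the inter-factor correlation matrix (field overlap), each factor's
    correlation with wages (wage-indicator strength), and each factor's
    coefficient of variation across jobs (discriminating power).
    """
    ratings = np.asarray(ratings, dtype=float)
    wages = np.asarray(wages, dtype=float)

    overlap = np.corrcoef(ratings, rowvar=False)      # factor-by-factor correlations
    wage_corr = np.array([np.corrcoef(ratings[:, j], wages)[0, 1]
                          for j in range(ratings.shape[1])])
    discrimination = ratings.std(axis=0, ddof=1) / ratings.mean(axis=0)
    return overlap, wage_corr, discrimination
```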

9.
We propose an optimization approach to the estimation of a simple closed curve describing the boundary of an object represented in an image. The problem arises in a variety of applications, such as template matching schemes for medical image registration. A regularized optimization formulation with an objective function that measures the normalized image contrast between the inside and outside of a boundary is proposed. Numerical methods are developed to implement the approach, and a set of simulation studies is carried out to quantify statistical performance characteristics. One set of simulations models emission computed tomography (ECT) images; a second set considers images with a locally coherent noise pattern. In both cases, the error characteristics are found to be quite encouraging. The approach is highly automated, which offers some practical advantages over currently used technologies in the medical imaging field.
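The flavour of the objective can be shown with a simple parametric boundary (a circle) rather than a general closed curve: the normalized contrast between pixel means inside and outside the boundary is maximized over the circle's centre and radius. This is an illustrative simplification; the paper optimizes over a richer, regularized curve representation, and the initial guess below is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def negative_contrast(params, image):
    """Negative normalized contrast between inside and outside of a circle."""
    cx, cy, r = params
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    if inside.sum() < 10 or (~inside).sum() < 10:
        return 0.0                                    # degenerate circle, worst score
    mu_in, mu_out = image[inside].mean(), image[~inside].mean()
    pooled_sd = np.sqrt(0.5 * (image[inside].var() + image[~inside].var())) + 1e-9
    return -abs(mu_in - mu_out) / pooled_sd

def fit_circle_boundary(image, init=(32.0, 32.0, 10.0)):
    """Estimate a circular object boundary by maximizing normalized contrast."""
    res = minimize(negative_contrast, x0=np.asarray(init), args=(image,),
                   method="Nelder-Mead")
    return res.x                                      # fitted (cx, cy, r)
```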

10.
11.
A few methods are available in the literature to test for integration and cointegration in the traditional framework, i.e. using the I(0)–I(1) paradigm. For integration, the best known are the Dickey–Fuller (DF), Augmented Dickey–Fuller (ADF) and Phillips–Perron (PP) tests, while for cointegration the Engle–Granger (EG) and Johansen procedures are broadly used. But how well do these methods perform when the underlying process exhibits long memory? The bootstrap technique is used here to approximate the distribution of integration and cointegration test statistics based on a semiparametric estimator of the fractional parameter of ARFIMA(p,d,q) models. The proposed bootstrap tests, along with the asymptotic test based on the fractional semiparametric estimator, are empirically compared with the standard tests for testing integration and cointegration in the long-memory context. Monte Carlo simulations are performed to evaluate the size and power of the tests. The results show that the conventional tests, except for the procedures based on the DF approach, lose power when compared with the fractional tests. As an illustration, the tests were applied to the Ibovespa (Brazil) and Dow Jones (USA) index series and led to the conclusion that these series do not share a long-run relationship.
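For reference, the conventional I(0)–I(1) tests that the paper benchmarks against are available in statsmodels; the fractional and bootstrap tests developed in the paper are not part of that library. The series names below are placeholders for the index data.

```python
from statsmodels.tsa.stattools import adfuller, coint

# ibov, dow: 1-D arrays of (log) index levels, aligned by date (user-supplied).
def conventional_tests(ibov, dow):
    # Augmented Dickey-Fuller unit-root test on each series.
    adf_ibov = adfuller(ibov, autolag="AIC")      # (statistic, p-value, ...)
    adf_dow = adfuller(dow, autolag="AIC")

    # Engle-Granger residual-based cointegration test.
    eg_stat, eg_pvalue, eg_crit = coint(ibov, dow)
    return adf_ibov[:2], adf_dow[:2], (eg_stat, eg_pvalue)
```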

12.
13.
The Birnbaum-Saunders distribution has been used quite effectively to model times to failure for materials subject to fatigue and for modeling lifetime data. In this paper we obtain asymptotic expansions, up to order n^{-1/2} and under a sequence of Pitman alternatives, for the non-null distribution functions of the likelihood ratio, Wald, score and gradient test statistics in the Birnbaum-Saunders regression model. The asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters and for testing the shape parameter. Monte Carlo simulation is presented in order to compare the finite-sample performance of these tests. We also present two empirical applications.

14.
In this paper we propose to solve a range of computational imaging problems within a unified regularized weighted least-squares (RWLS) framework. These problems include data smoothing and completion, edge-preserving filtering, gradient-vector flow estimation, and image registration. Although originally very different, they are special cases of the RWLS model using different data weightings and regularization penalties. Numerically, we propose a preconditioned conjugate gradient scheme which is particularly efficient in solving RWLS problems. We provide a detailed analysis of the system conditioning justifying our choice of the preconditioner that improves the convergence. This numerical solver, which is simple, scalable and parallelizable, is found to outperform most of the existing schemes for these imaging problems in terms of convergence rate.
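A minimal instance of the RWLS idea for 1-D data smoothing and completion: minimize sum_i w_i (u_i - f_i)^2 + lambda * ||D u||^2, which leads to the linear system (W + lambda D^T D) u = W f. The sketch solves it with conjugate gradients and a simple Jacobi (diagonal) preconditioner rather than the preconditioner analysed in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def rwls_smooth(f, w, lam=10.0):
    """Solve (W + lam * D^T D) u = W f for a 1-D signal f with data weights w.

    w[i] = 0 marks a missing sample (completion); larger w[i] keeps the
    smoothed value closer to f[i].
    """
    n = len(f)
    W = sp.diags(w)
    # First-difference operator D of shape (n-1, n).
    D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
    A = (W + lam * D.T @ D).tocsr()
    b = W @ np.asarray(f, dtype=float)

    # Jacobi preconditioner: inverse of A's diagonal.
    M = sp.diags(1.0 / A.diagonal())
    u, info = cg(A, b, M=M)
    assert info == 0, "CG did not converge"
    return u
```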

15.
Functional Size Measurement (FSM) methods are intended to measure the size of software by quantifying the functional user requirements of the software. The capability to accurately quantify the size of software in an early stage of the development lifecycle is critical to software project managers for evaluating risks, developing project estimates and having early project indicators. In this paper, we present OO-Method Function Points (OOmFP), which is a new FSM method for object-oriented systems that is based on measuring conceptual schemas. OOmFP is presented following the steps of a process model for software measurement. Using this process model, we present the design of the measurement method, its application in a case study, and the analysis of different evaluation types that can be carried out to validate the method and to verify its application and results.

16.
17.
This paper presents a thorough study of gender classification methodologies evaluated on neutral, expressive and partially occluded faces, when these are used in all possible arrangements of training and testing roles. A comprehensive comparison of two representation approaches (global and local), three types of features (grey levels, PCA and LBP), three classifiers (1-NN, PCA + LDA and SVM) and two performance measures (CCR and d′) is provided over single- and cross-database experiments. The experiments revealed some interesting findings, which were supported by three non-parametric statistical tests: when training and test sets contain different types of faces, local models using the 1-NN rule outperform global approaches, even those using SVM classifiers; however, with the same type of faces, even if the acquisition conditions are diverse, the statistical tests could not reject the null hypothesis of equal performance between global SVMs and local 1-NNs.
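The classifier comparison itself is easy to reproduce on any feature matrix (grey levels, PCA projections or LBP histograms). The sketch below uses generic training and test arrays with scikit-learn and reports CCR only; the feature extraction, d′ measure and the exact cross-database protocol of the paper are not reproduced.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_classifiers(X_train, y_train, X_test, y_test):
    """Correct classification rate of 1-NN, PCA+LDA and SVM on fixed splits.

    For a cross-database experiment, X_train/X_test simply come from
    different face databases.
    """
    models = {
        "1-NN": KNeighborsClassifier(n_neighbors=1),
        "PCA+LDA": make_pipeline(StandardScaler(), PCA(n_components=0.95),
                                 LinearDiscriminantAnalysis()),
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    }
    return {name: clf.fit(X_train, y_train).score(X_test, y_test)
            for name, clf in models.items()}
```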

18.
19.
There is wide agreement on the need for a markup standard for encoding literary texts. The Standard Generalized Markup Language (SGML) seems to provide the best basis for such a standard. But two problems inhibit the acceptance of SGML for this purpose. (1) Computer-assisted textual studies often require the maintenance of multiple views of a document's structure, but SGML is not designed to accommodate such views. (2) An SGML-based standard would appear to entail the keyboarding of more markup than researchers are accustomed to, or are likely to accept. We discuss five ways of dealing with the first problem, and several ways of reducing the burden of markup. We conclude that the problem of maintaining multiple views can be surmounted, though with some difficulty, and that the markup required for an SGML-based standard can be reduced to a level comparable to that of other markup schemes currently in use. Ron Hayter is the Senior Software Developer of Software Exoterica Corporation, Ottawa, Canada. Maria Karibaba obtained the M.Sc. degree in Computing and Information Science at Queen's University and returned to Greece. George Logan is Head of the Department of English at Queen's University, Canada. John McFadden is the President of Software Exoterica Corporation, Ottawa, Canada. David Barnard is Head of the Department of Computing and Information Science at Queen's University, Canada. This paper has benefited greatly from comments provided by the referees.

20.
In the last decade, many researchers have devoted considerable effort to the problem of image restoration. However, no recent study has undertaken a comparative evaluation of these techniques under conditions where a user may have different kinds of a priori information about the ideal image. To this end, we briefly survey some recent techniques and compare the performance of a linear space-invariant (LSI) maximum a posteriori (MAP) filter, an LSI reduced update Kalman filter (RUKF), an edge-adaptive RUKF, and an adaptive convex-type constraint-based restoration implemented via the method of projection onto convex sets (POCS). The mean square errors resulting from the LSI algorithms are compared with that of the finite impulse response Wiener filter, which is the theoretical limit in this case. We also compare the results visually in terms of their sharpness and the appearance of artifacts. As expected, the space-variant restoration methods, which are adaptive to local image properties, obtain the best results.
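For reference, the frequency-domain Wiener restoration that commonly serves as the LSI benchmark in such comparisons can be written in a few lines. The blur kernel and the noise-to-signal ratio are assumed known here (the idealized setting), and the PSF is assumed to follow the np.fft origin convention; this is a generic sketch, not the specific FIR Wiener filter of the paper.

```python
import numpy as np

def wiener_restore(degraded, psf, nsr=0.01):
    """Frequency-domain Wiener restoration of a blurred, noisy image.

    degraded: observed image; psf: blur kernel (origin at index (0, 0),
    zero-padded to the image size); nsr: assumed noise-to-signal power ratio.
    """
    H = np.fft.fft2(psf, s=degraded.shape)       # blur transfer function
    G = np.fft.fft2(degraded)
    wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener inverse filter
    restored = np.real(np.fft.ifft2(wiener * G))
    return restored
```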
