Similar Documents
20 similar documents found (search time: 15 ms).
1.
A practical problem related to the estimation of quantiles in double sampling with arbitrary sampling designs in each of the two phases is investigated. In practice, this scheme is commonly used for official surveys, in which quantile estimation is often required when the investigation deals with variables such as income or expenditure. A class of estimators for quantiles is proposed and some important properties, such as asymptotic unbiasedness and asymptotic variance, are established. The optimal estimator, in the sense of minimizing the asymptotic variance, is also presented. The proposed class contains several known types of estimators, such as ratio and regression estimators, which are of practical use and are therefore derived explicitly. The proposed estimators are compared with the direct estimator in an empirical study across several populations. Results show that a gain in efficiency can be obtained.

2.
In this paper we derive five first-order likelihood-based confidence intervals for a population proportion parameter based on binary data subject to false-positive misclassification and obtained using a double sampling plan. We derive confidence intervals based on certain combinations of likelihood, Fisher-information types, and likelihood-based statistics. Using Monte Carlo methods, we compare the coverage properties and average widths of three new confidence intervals for a binomial parameter. We determine that an interval estimator derived from inverting a score-type statistic is superior in terms of coverage probabilities to three competing interval estimators for the parameter configurations examined here. Utilizing the expressions derived, we also determine confidence intervals for a binary parameter using real data subject to false-positive misclassification.
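The score-type interval that performs best above is derived for the misclassification model with double sampling; the underlying idea of inverting a score statistic is easiest to see in the plain binomial case. A minimal sketch of the standard Wilson score interval (illustrative only, not the paper's estimator):

```python
import math
from statistics import NormalDist

def wilson_score_interval(x, n, conf=0.95):
    """Score (Wilson) interval for a binomial proportion: invert the
    score statistic z = (p_hat - p) / sqrt(p (1 - p) / n) rather than
    the Wald statistic, which improves coverage near 0 and 1."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

For x = 8 successes out of n = 10 this gives roughly (0.49, 0.94), noticeably asymmetric around the point estimate 0.8, unlike the symmetric Wald interval.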

3.
The estimation of quantiles in two-phase sampling with an arbitrary sampling design in each of the two phases is investigated. Several ratio- and exponentiation-type estimators that provide the optimum estimate of a quantile based on an optimum exponent α are proposed. Properties of these estimators are studied under a large-sample approximation, and the use of double sampling for stratification to estimate quantiles is also examined. The practical performance of these estimators is evaluated for the three quartiles on data from two real populations under different sampling designs. The simulation study shows that the proposed estimators can be very satisfactory in terms of relative bias and efficiency.

4.
The paper proposes a practical procedure for obtaining a confidence interval (CI) for the parameter π of the Bernoulli distribution. Let x be the observed number of successes of a random sample of size n from this distribution. The procedure is as follows: use Table 1 to determine whether the given pair (n,x) is a small or a large sample pair. If the small sample situation applies then use Table 2 which gives the Sterne–Crow CI. Otherwise, use the Anscombe CI for which practical formulas are given.

5.
The notion of a confidence interval (CI) is used in both major branches of mathematical statistics: parameter estimation and hypothesis testing. In parameter estimation, a CI defines bounds for the parameter estimate with a certain confidence probability; in hypothesis testing, a CI defines, with a certain confidence probability, an interval of values of the random variable that do not contradict the hypothesis under test.

6.
In Balabdaoui, Rufibach, and Wellner (2009), pointwise asymptotic theory was developed for the nonparametric maximum likelihood estimator of a log-concave density. Here, the practical aspects of their results are explored. Namely, the theory is used to develop pointwise confidence intervals for the true log-concave density. To do this, the quantiles of the limiting process are estimated and various ways of estimating the nuisance parameter appearing in the limit are studied. The finite sample size behavior of these estimated confidence intervals is then studied via a simulation study of the empirical coverage probabilities.

7.
Confidence intervals for the population variance, and for the difference between the variances of two populations, based on the ordinary t-statistic combined with the bootstrap method are suggested. Theoretical and practical aspects of the suggested techniques are presented, along with a comparison with existing methods (those based on the chi-square and F statistics). In addition, an application of the presented methods to an insurance property data set is described and analyzed. For data from an exponential distribution, the confidence intervals calculated by the described methods (based on the transformed t-statistic and the bootstrap technique) give consistent and the best coverage in comparison with the other methods.
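The method above combines a t-statistic transformation with resampling; as a simpler stand-in, a percentile-bootstrap interval for the variance illustrates the resampling half of the idea (a sketch, not the authors' t-based construction):

```python
import random
import statistics

def bootstrap_ci_variance(data, conf=0.95, n_boot=2000, seed=0):
    """Percentile-bootstrap confidence interval for the population
    variance: resample the data with replacement, recompute the sample
    variance each time, and take the empirical tail quantiles."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(
        statistics.variance([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    alpha = 1 - conf
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

Because the bootstrap distribution of the variance is skewed for skewed data (e.g. exponential), this interval is asymmetric around the sample variance, unlike the chi-square interval's fixed shape under normality.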

8.
We derive a profile-likelihood confidence interval and a score-based confidence interval to estimate the population prevalences, test sensitivities, and test specificities of two conditionally independent diagnostic tests when no gold standard is available. We are motivated by a real-data example on the study of the properties for two fallible diagnostic tests for bovine immunodeficiency virus. We compare the coverage and average width of the two new intervals with an interval based on the asymptotic normality of the maximum likelihood estimator and a Bayesian interval estimator via Monte Carlo simulation. We determine that for the parameter configurations considered here, the profile-likelihood, score, and Bayesian intervals all perform adequately in terms of coverage, but overall, the profile-likelihood interval performs best in terms of yielding at least nominal coverage with minimum expected width.

9.
This paper compares three confidence intervals for the difference between two means when the distributions are non-normal and their variances are unknown. The intervals considered are the Welch-Satterthwaite confidence interval, an adaptive interval that incorporates a preliminary test (pre-test) of symmetry for the underlying distributions, and an adaptive interval that incorporates the Shapiro-Wilk test for normality as a pre-test. The adaptive intervals use the Welch-Satterthwaite interval if the pre-test fails to reject symmetry (or normality) for both distributions; otherwise, they apply the Welch-Satterthwaite interval to the log-transformed data and transform the interval back. Our study shows that the adaptive interval with the pre-test of symmetry has the best coverage among the three intervals considered. Simulation studies show that it performs as well as the Welch-Satterthwaite interval for symmetric distributions and better than the Welch-Satterthwaite interval for skewed distributions.
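The Welch-Satterthwaite interval that all three procedures build on can be sketched as follows; to stay in the standard library, a normal quantile replaces the t quantile (a large-sample approximation), and the log-transform branch of the adaptive procedure is omitted:

```python
import math
from statistics import mean, variance, NormalDist

def welch_interval(x, y, conf=0.95):
    """Welch-Satterthwaite interval for the difference of two means
    with unequal variances. The Satterthwaite degrees of freedom are
    reported but, in this simplified sketch, a normal quantile is used
    in place of the t quantile."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)
    se2 = vx / nx + vy / ny
    # Satterthwaite effective degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    d = mean(x) - mean(y)
    half = z * math.sqrt(se2)
    return d - half, d + half, df
```

The adaptive procedures would first run a symmetry (or normality) pre-test on each sample and, on rejection, apply this same interval to log-transformed data before back-transforming.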

10.
The penalized calibration technique in survey sampling combines usual calibration and soft calibration by introducing a penalty term. Certain relevant estimates in survey sampling can be considered as penalized calibration estimates obtained as particular cases from an optimization problem with a common basic structure. In this framework, a case-deletion diagnostic is proposed for a class of penalized calibration estimators including both design-based and model-based estimators. The diagnostic compares finite population parameter estimates and can be calculated from quantities related to the full data set. The resulting diagnostic is a function of the residual and leverage, as with other diagnostics in regression models, and of the calibration weight, a feature unique to survey sampling. Moreover, a particular case, which includes the basic unit-level model for small area estimation, is considered. Both a real and an artificial example are included to illustrate the proposed diagnostic. The results obtained clearly show that the proposed diagnostic depends on the calibration and soft-calibration variables, on the penalization term, as well as on the parameter to estimate.

11.
Within the framework of functional gradient descent/ascent, this paper proposes Quantile Boost (QBoost) algorithms that predict quantiles of the response of interest for regression and binary classification. Quantile Boost Regression performs gradient descent in function space to minimize the objective function used by quantile regression (QReg). In the classification setting, the class label is defined via a hidden variable, and the quantiles of the class label are estimated by fitting the corresponding quantiles of the hidden variable. An equivalent form of the definition of a quantile is introduced, whose smoothed version is employed as the objective function and then maximized by functional gradient ascent to obtain the Quantile Boost Classification algorithm. Extensive experiments and detailed analysis show that QBoost performs better than the original QReg and other alternatives for regression and binary classification. Furthermore, QBoost can handle high-dimensional problems and is more robust to noisy predictors.
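The objective that QReg and QBoost minimize is the pinball (check) loss. Its role is easiest to see with a single constant predictor, where plain subgradient descent recovers the sample quantile (a toy sketch of the loss, not the boosted algorithm):

```python
def pinball_quantile(ys, tau, lr=0.05, steps=20000):
    """Estimate the tau-quantile of ys by subgradient descent on the
    mean pinball loss rho_tau(y - q) -- the objective behind quantile
    regression, here with a constant predictor q instead of an ensemble."""
    q = sum(ys) / len(ys)  # start from the mean
    for _ in range(steps):
        # subgradient w.r.t. q: (1 - tau) for points below q, -tau otherwise
        g = sum((1 - tau) if y < q else -tau for y in ys) / len(ys)
        q -= lr * g
    return q
```

The subgradient is zero exactly where a fraction tau of the data lies below q, so the descent drifts to the empirical tau-quantile; QBoost applies the same descent in function space, one base learner per step.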

12.
In this article we derive likelihood-based confidence intervals for the risk ratio using over-reported two-sample binary data obtained using a double-sampling scheme. The risk ratio is defined as the ratio of two proportion parameters. By maximizing the full likelihood function, we obtain closed-form maximum likelihood estimators for all model parameters. In addition, we derive four confidence intervals: a naive Wald interval, a modified Wald interval, a Fieller-type interval, and an Agresti-Coull interval. All four confidence intervals are illustrated using cervical cancer data. Finally, we conduct simulation studies to assess and compare the coverage probabilities and average lengths of the four interval estimators. We conclude that the modified Wald interval, unlike the other three intervals, produces close-to-nominal confidence intervals under various simulation scenarios examined here and, therefore, is preferred in practice.

13.
This work is concerned with the robust resilient control problem for uncertain networked control systems (NCSs) with variable sampling intervals, time-variant induced delays, and possible data dropouts, a combination seldom considered in the current literature. The approach is mainly based on modelling the NCS as a continuous system with a time-varying delay. Starting from the nominal case, delay-dependent resilient robust stabilising conditions for the closed-loop NCS against controller gain variations are derived by employing a novel Lyapunov–Krasovskii functional that exploits the information on both the lower and upper bounds of the varying input delay, as well as the upper bound on the variable sampling interval. A feasible solution of the obtained criterion, formulated as linear matrix inequalities, can then be computed. A tighter bounding technique is presented for the time derivative of the functional so as to retain more of the useful terms, while neither slack variables nor correlated augmented terms are introduced, which reduces the overall computational burden. Two examples are given to show the effectiveness of the proposed method.

14.
Geometric quantiles are investigated using data collected from a complex survey. Geometric quantiles are an extension of univariate quantiles in a multivariate set-up that uses the geometry of multivariate data clouds. A very important application of geometric quantiles is the detection of outliers in multivariate data by means of quantile contours. A design-based estimator of geometric quantiles is constructed and used to compute quantile contours in order to detect outliers in both multivariate data and survey sampling set-ups. An algorithm for computing geometric quantile estimates is also developed. Under broad assumptions, the asymptotic variance of the quantile estimator is derived and a consistent variance estimator is proposed. Theoretical results are illustrated with simulated and real data.
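Geometric quantiles generalize the spatial median by tilting its estimating equation with a direction vector u; the u = 0 case (the spatial median itself) can be computed with the classical Weiszfeld iteration. A sketch of that base case, ignoring the survey weights a design-based estimator would add:

```python
import math

def geometric_median(points, tol=1e-9, max_iter=1000):
    """Weiszfeld iteration for the spatial median, i.e. the geometric
    quantile at u = 0: minimize the sum of Euclidean distances to the
    data points by iteratively re-weighted averaging."""
    m = [sum(c) / len(points) for c in zip(*points)]  # start at the mean
    for _ in range(max_iter):
        num = [0.0] * len(m)
        den = 0.0
        for p in points:
            d = math.dist(p, m)
            if d < tol:
                continue  # crude safeguard if the iterate hits a data point
            w = 1.0 / d
            den += w
            for i, c in enumerate(p):
                num[i] += w * c
        new = [c / den for c in num]
        if math.dist(new, m) < tol:
            return new
        m = new
    return m
```

Quantile contours are then traced by solving the analogous fixed-point problem over a grid of directions u with ||u|| < 1.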

15.
The comparison of classification accuracy statements has generally been based upon tests of difference or inequality when other scenarios and approaches may be more appropriate. Procedures for evaluating two scenarios with interest focused on the similarity in accuracy values, non-inferiority and equivalence, are outlined following a discussion of tests of difference (inequality). It is also suggested that the confidence interval of the difference in classification accuracy may be used as well as or instead of conventional hypothesis testing to reveal more information about the disparity in the classification accuracy values compared.

16.
We revisit the problem of determining confidence interval widths for the comparison of means. For the independent two-sample (two-sided) case, Goldstein and Healy (1995) draw attention to the fact that comparisons based on 95% error bars are not very effective in assessing the statistical significance of the difference in means, and derive the correct confidence level for such a comparison. We extend Goldstein and Healy (1995) to account for the correlation structure and unequal variances. We use the results to develop rules of thumb for evaluating differences in an exploratory manner, in the spirit of Moses (1987) and Cumming (2009), starting from the independent case. We illustrate the method for the simple comparison of two means in a real data set, provide R code that may be easily implemented in practice, and discuss the extension of the method to other applied problems.
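The Goldstein-Healy adjustment amounts to drawing individual error bars at roughly the 83.4% level, so that bar non-overlap matches a 5% two-sided test for two independent means with equal standard errors. A sketch of that basic equal-SE case (before the extension to correlation and unequal variances):

```python
import math
from statistics import NormalDist

def comparison_bar_level(alpha=0.05):
    """Confidence level for individual error bars such that non-overlap
    corresponds to a two-sided alpha-level test of equal means for two
    independent estimates with equal standard errors: each bar must
    cover z_{1-alpha/2} / sqrt(2) standard errors."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_bar = z / math.sqrt(2)        # half-width per bar, in SE units
    return 2 * nd.cdf(z_bar) - 1
```

For alpha = 0.05 this returns about 0.834: the familiar advice to plot roughly 83-84% bars, rather than 95% bars, when the point of the plot is visual comparison of two means.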

17.
We obtain confidence intervals for willingness-to-pay (WTP) measures derived from a mode choice model estimated to analyse travel demand for suburban trips in the two main interurban corridors in Gran Canaria, using a mixed RP/SP database. We considered a specification of the systematic utility that incorporates income effect and interactions among socioeconomic variables and level-of-service attributes, as well as between travel cost and frequency. As our model yields rather complex expressions for the marginal utilities, we simulated the distribution of the WTP (in general, unknown) from a multivariate normal distribution of the parameter vector. For every random draw of the parameter vector, the corresponding simulated WTP was obtained by applying the sample enumeration method to the individuals in the RP database. The extremes of the confidence interval were determined by the percentiles of this distribution. After trying different simulation strategies, we observed that the size of the intervals was strongly affected by outliers as well as by the magnitude of the simulated parameters. In all cases examined, the simulated distribution of the corresponding WTP measure presents an asymmetric shape that was very similar for the two model specifications considered, which is consistent with previous findings using a radically different approach. We also observed that the upper extreme of the confidence interval for the value of time in private transport was very unstable across different numbers of random draws.

18.
To address the problem that existing spam-bookmark detection methods lose detection performance when user profile information is scarce, an ensemble SVM spam-bookmark detection method incorporating confidence measures is proposed. First, the training samples are resampled with replacement using the Bootstrap technique to obtain training subsets for the individual SVMs. The standard SVM outputs are then fitted directly to a sigmoid function to obtain posterior probability outputs, which serve as the confidence of the class prediction. A confidence-based fusion method, which outperforms the voting strategy, is proposed to combine the outputs of the individual SVMs. Experimental results show that the method achieves good detection performance when user profile information is scarce.

19.
In this paper, the performance of the Finite Cell Method is studied for nearly incompressible finite strain plasticity problems. The Finite Cell Method is a combination of the fictitious domain approach with the high-order Finite Element Method. It provides easy mesh generation capabilities for highly complex geometries; moreover, this method offers high convergence rates, the possibility to overcome locking and robustness against high mesh distortions. The performance of this method is numerically investigated based on computations of benchmark and applied problems. The results are also verified with the h- and p-version Finite Element Method. It is demonstrated that the Finite Cell Method is an appropriate simulation tool for large plastic deformations of structures with complex geometries and microstructured materials, such as porous and cellular metals that are made up of ductile materials obeying nearly incompressible J2 theory of plasticity.

20.
In this paper, we study the scheduling problem of jobs with multiple active intervals. Each job in the problem instance has disjoint active time intervals in which it can be executed, and a workload given by the required number of CPU cycles. Previous work studied the multiple-interval job scheduling problem in which each job must be assigned enough CPU cycles within a single one of its active intervals. We study a different, practical version in which partial work done by the end of an interval remains valid, and a job is considered finished once the total CPU cycles assigned to it across all its active intervals reach the requirement. The goal is to find a feasible schedule that minimizes energy consumption. By adapting the algorithm for single-interval jobs proposed by Yao, Demers and Shenker (1995) [1], one can still obtain an optimal schedule. However, the two phases of that algorithm (finding the critical interval and scheduling it) can no longer be carried out directly. We present polynomial-time algorithms for both phases for jobs with multiple active intervals, and can therefore still compute the optimal schedule in polynomial time.
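The first phase, critical-interval finding, can be illustrated in the original single-interval setting of Yao, Demers and Shenker: the critical interval is the window maximizing work density over all windows bounded by release times and deadlines. A brute-force sketch (the job-triple encoding is illustrative):

```python
def critical_interval(jobs):
    """Phase one of the Yao-Demers-Shenker schedule for single-interval
    jobs: over all windows [s, t] bounded by release times and deadlines,
    find the one maximizing the intensity
        g(s, t) = (total work of jobs with [r, d] inside [s, t]) / (t - s).
    The maximum intensity is the optimal processor speed on that window.
    jobs: list of (release, deadline, work) triples."""
    times = sorted({t for r, d, _ in jobs for t in (r, d)})
    best_g, best_window = 0.0, None
    for i, s in enumerate(times):
        for t in times[i + 1:]:
            work = sum(w for r, d, w in jobs if s <= r and d <= t)
            g = work / (t - s)
            if g > best_g:
                best_g, best_window = g, (s, t)
    return best_g, best_window
```

The full algorithm schedules the jobs of the critical interval at speed g, removes them, collapses the interval, and repeats; with multiple active intervals per job, it is exactly this containment test and the collapsing step that no longer apply directly.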


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号