Similar Articles

20 similar articles found.
1.
Bechhofer and Kulkarni (1982) proposed procedures for selecting that one of k Bernoulli populations with the largest single-trial success probability. They showed that their procedure for k = 2 minimizes the expected total sample size amongst a class of procedures, all of which attain the same probability of correct selection. Kulkarni and Jennison (1986) generalized this result to the case k ≥ 3. In this article we prove the stronger result that the Bechhofer-Kulkarni procedure for each k ≥ 2 stochastically minimizes the distribution of sample size amongst procedures in the same class. That is, the distribution of sample size for the Bechhofer-Kulkarni procedure is the same as or stochastically smaller than that for any other procedure in the class.
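The "stochastically smaller" relation used in this abstract can be made concrete with a short sketch. The check below compares empirical sample-size distributions; the function names and toy data are illustrative only and do not reproduce the Bechhofer-Kulkarni procedure itself.

```python
# N1 is stochastically smaller than N2 if P(N1 <= n) >= P(N2 <= n) for all n.

def ecdf(sample, n):
    """Empirical P(X <= n) for a list of observed sample sizes."""
    return sum(1 for x in sample if x <= n) / len(sample)

def stochastically_smaller(sample1, sample2):
    """True if the empirical cdf of sample1 dominates that of sample2 everywhere."""
    support = sorted(set(sample1) | set(sample2))
    return all(ecdf(sample1, n) >= ecdf(sample2, n) for n in support)

# Toy sample-size draws from two hypothetical stopping rules.
n_fast = [3, 4, 4, 5, 6]
n_slow = [4, 5, 5, 6, 7]
```

Here `stochastically_smaller(n_fast, n_slow)` holds while the reverse does not, which is exactly the ordering the article establishes for the Bechhofer-Kulkarni sample size against any competitor in its class.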

2.
We introduce a new family of sequential selection procedures wherein the subsets selected have random sizes. In comparison to subset selection procedures that select subsets of fixed size, the new procedures alleviate the need to specify the subset size prior to the experiment. We discuss the application of such procedures in the context of early phase clinical trials. The new procedures retain the adaptive features of the Levin-Robbins-Leu family of sequential subset selection procedures for selecting subsets of fixed size, namely, sequential elimination of inferior treatments and sequential recruitment of superior treatments. These two adaptive features respectively address ethical concerns that diminish interest in nonadaptive procedures and also allow promising treatments to be brought forward for further testing without having to wait until the end of the trial. The new procedures differ from the classical subset selection procedures of Shanti S. Gupta in terms of their respective goals and operating characteristics and we compare the two approaches in a simulation study. The findings suggest that whereas Gupta’s procedure achieves its goal of including the single best treatment in the final selected subset with high probability, it does so by virtue of a nonadaptive, fixed sample size procedure that lacks necessary flexibility in the context of clinical research. By contrast, the new procedures aim to select treatment subsets that satisfy a different criterion, that of acceptable subset selection with high probability, while allowing adaptive elimination and recruitment and other flexibilities which we discuss to fit the practical needs of selection methods in clinical research.

3.
Abstract

We state a general formula that provides a lower bound for the probability of various types of acceptable subset selection with the Levin–Robbins–Leu binomial subset selection procedure without elimination or recruitment. We prove the truth of a conjecture of Bechhofer, Kiefer, and Sobel for this procedure by applying the general lower bound. We also introduce a simple modification that allows sequential elimination of inferior populations and recruitment of superior populations. Numerical evidence indicates that the new procedure also obeys the general lower bound while reducing the expected number of observations and failures compared with nonadaptive methods.

4.
Abstract

In this article, we consider a variety of inference problems for high-dimensional data. The purpose of this article is to suggest directions for future research and possible solutions for p ≫ n problems by using new types of two-stage estimation methodologies. This is the first attempt to apply sequential analysis to high-dimensional statistical inference ensuring prespecified accuracy. We offer the sample size determination for inference problems by creating new types of multivariate two-stage procedures. To develop theory and methodologies, the most important and basic idea is the asymptotic normality when p → ∞. By developing asymptotic normality when p → ∞, we first give (a) a given-bandwidth confidence region for the square loss. In addition, we give (b) a two-sample test to assure prespecified size and power simultaneously together with (c) an equality-test procedure for two covariance matrices. We also give (d) a two-stage discriminant procedure that controls misclassification rates being no more than a prespecified value. Moreover, we propose (e) a two-stage variable selection procedure that provides screening of variables in the first stage and selects a significant set of associated variables from among a set of candidate variables in the second stage. Following the variable selection procedure, we consider (f) variable selection for high-dimensional regression to compare favorably with the lasso in terms of the assurance of accuracy and the computational cost. Further, we consider variable selection for classification and propose (g) a two-stage discriminant procedure after screening some variables. Finally, we consider (h) pathway analysis for high-dimensional data by constructing a multiple test of correlation coefficients.

5.
From K (≥ 2) independent normal populations, we wish to select the one associated with the largest mean, assuming that the common variance is unknown. We adopt the “indifference zone” approach of Bechhofer (1954) and propose an accelerated version of the purely sequential procedure of Robbins et al. (1968). Asymptotic second-order expansions for the probability of correct selection and other characteristics of this modified rule are provided for arbitrary K. We discuss both small and moderate sample size performances of our stopping time via computer simulations and note that the accelerated version can save a considerable amount of sampling operations, yet remain very competitive with the classical sampling procedure of Robbins et al. (1968).
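A purely sequential rule of the Robbins et al. type can be sketched as follows: sample all K populations in parallel and stop the first time the sample size exceeds a variance-driven threshold. This is a minimal illustration, not the accelerated procedure of the article; the constants `h` and `delta` and the stopping form `n >= (h/delta)**2 * S_n**2` are assumptions made for the sketch.

```python
import random
import statistics

def sequential_selection(pop_means, sigma, h, delta, n0=2, rng=random):
    """One run of a Robbins-et-al.-style purely sequential selection rule
    (sketch): draw one observation per population per stage, stop once
    n >= (h / delta)**2 * S_n**2, where S_n**2 pools the K sample variances.
    Returns (stopping sample size per population, index of selected population)."""
    k = len(pop_means)
    data = [[rng.gauss(mu, sigma) for _ in range(n0)] for mu in pop_means]
    n = n0
    while True:
        pooled = sum(statistics.variance(d) for d in data) / k
        if n >= (h / delta) ** 2 * pooled:
            break
        for i, mu in enumerate(pop_means):
            data[i].append(rng.gauss(mu, sigma))
        n += 1
    means = [statistics.fmean(d) for d in data]
    return n, means.index(max(means))
```

The "accelerated" variant studied in the article replaces much of this one-at-a-time sampling with batch sampling after a pilot stage, which is what saves sampling operations.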

6.
Abstract

In this article on sequential adaptive testing, we have studied the optimal allocation between two populations for testing a composite hypothesis involving the parameters, with the goal of decreasing allocation of one of the treatments to the order of the logarithm of the sample size while decreasing the probability of incorrect selection to zero. We have proved the result for large sample sizes both mathematically and by simulation studies.

7.
Abstract

We study a key inequality that implies the lower bound formula for the probability of correct selection and other selection-related events of interest in the Levin-Robbins-Leu family of sequential binomial subset selection procedures. We present a strategy for proving the key inequality and give a mostly complete general proof. The strategy provides an entirely complete and rigorous proof of the inequality for as many as seven competing populations using computer-assisted symbolic manipulation.

8.
Abstract

We consider the problem of partitioning a set of given normal populations, with respect to a control population, into two subsets according to their unknown means. In this article, using the subset selection approach in the first stage and indifference zone approach in the second stage, we propose a two-stage procedure. The procedure partitions “too superior or inferior” treatments after the first stage, thereby reducing the average sample size and making the procedure more attractive for practitioners. The proposed procedure is studied and compared via the Monte Carlo simulation studies with other competitive procedures known in the literature.

9.
Techniques of Armitage (1958) for finding confidence intervals after sequential tests (SCI) are applied to curtailed binomial test boundaries. The form of the exact randomized SCI is given. We also show that the conservative confidence interval calculated as though a fixed sample size procedure had been used (FCI) remains conservative when used after stopping on a curtailed boundary. Numerical results are obtained to assess the potential gains that may be obtained by using the conservative SCI or exact SCI over the conservative FCI for these boundaries.

10.
Abstract

Performance guarantees for multinomial selection procedures are usually derived by finding the least favorable configuration (LFC)—the one for which the probability of correct selection is minimum outside the indifference zone—and then evaluating the procedure on that configuration. The slippage configuration has been proved to be the LFC for several procedures and has been conjectured to be the worst for some other procedures. The principal result of this article unifies and extends all previous results for two alternatives: the slippage configuration is the worst for all procedures that have a finite expected number of trials and always select the alternative with more successes. A generalization of the key inequality in the proof to an arbitrary number of alternatives is conjectured.

11.
Abstract

Some ranking and selection (R&S) procedures for steady-state simulation require estimates of the asymptotic variance parameter of each system to guarantee a certain probability of correct selection. In this paper, we show that the performance of such R&S procedures depends highly on the quality of the variance estimates that are used. In fact, we study the performance of R&S procedures using three new variance estimators—overlapping area, overlapping Cramér–von Mises, and overlapping modified jackknifed Durbin–Watson estimators—that show better long-run performance than other estimators previously used in conjunction with R&S procedures for steady-state simulations.

12.
13.
Jun Li, Sequential Analysis, 2013, 32(4): 475-487
Abstract

Estimation of the offset between two network clocks has received a lot of attention in the literature, with the motivating force being data networking applications that require synchronous communication protocols. Statistical modeling techniques have been used to develop improved estimation algorithms, with a recent development being the construction of a confidence interval based on a fixed sample size. Lacking in the fixed sample size confidence interval procedures is a usable relationship between sample size and the width of the resulting confidence interval. Were that available, an optimum sample size could be determined to achieve a specified level of precision in the estimator and thereby improve the efficiency of the estimation procedure by reducing unnecessary overhead in the network that is associated with collecting the data used by the estimation schemes. A fixed sample size confidence interval that has a prescribed width is not available for this problem. However, in this paper we develop and compare alternative sequential intervals with fixed width and demonstrate that an effective solution is available.
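The classical template for a sequential fixed-width confidence interval is the Chow-Robbins rule: keep sampling until the estimated half-width falls below the prescribed value. The sketch below illustrates that template under a simple i.i.d. model; it is not the clock-offset procedure of the paper, and the constant `z` and pilot size `n0` are illustrative assumptions.

```python
import random
import statistics

def fixed_width_interval(draw, width, z=1.96, n0=10):
    """Chow-Robbins-style sequential fixed-width confidence interval (sketch):
    keep sampling until the estimated half-width z * s_n / sqrt(n) is at most
    width / 2, then center an interval of the prescribed width at the mean.
    `draw` is a zero-argument sampler. Returns (lower, upper, sample size)."""
    data = [draw() for _ in range(n0)]
    while True:
        n = len(data)
        s = statistics.stdev(data)
        if z * s / n ** 0.5 <= width / 2:
            break
        data.append(draw())
    m = statistics.fmean(data)
    return m - width / 2, m + width / 2, len(data)
```

The point the abstract makes is that, unlike the fixed sample size approach, the stopping time itself adapts to the unknown variability, so the prescribed width is achieved without over-collecting network measurements.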

14.
Abstract

In sequential analysis, investigation of stopping rules is important, as they govern the sampling cost and derivation and accuracy of frequentist inference. We study stopping rules in sampling from a population comprised of an unknown number of classes where all classes are equally likely to occur in each selection. We adopt Blackwell's criterion for a “more informative experiment” to compare stopping rules in our context and derive certain complete class results, which provide some guidance for selecting a stopping rule. We show that it suffices to let the stopping probability, at any time, depend only on the number of selections and the number of discovered classes up to that time. A more informative stopping rule costs a higher expected sample size, and conversely, any given stopping rule can be improved with an increment in expected sample size. Admissibility within all stopping rules with a uniform upper bound on average sample size is also discussed. Any fixed-sample-size rule is shown to be admissible within an appropriate class. Finally, we show that for the minimal sufficient statistic to be complete, which is useful for unbiased estimation, the stopping rule must be nonrandomized.

15.
In a multinomial setting with a fixed number k of cells, the problem of screening out cells to find the "best" cell, i.e., the one with the smallest cell probability, or looking for a (small) subset of cells containing the best cell is revisited. An inverse sampling procedure is used, unlike past work on this problem ([1], [2], [3], and [4]). Finding the cell with the smallest cell probability is clearly more difficult than finding the one with the largest cell probability. The proposed procedure takes one observation at a time (as usual) and assigns a zero to all those (and only those) k - 1 cells into which the observation does not fall. Sampling continues sequentially and stops as soon as any one cell has accumulated r zeros.

For any given integer c (with 0 ≤ c < r), we put into the selected subset (SS) all those cells with at least r - c zeros and assert that this selected subset contains the best cell. It is important to note that for the slippage configuration (SC) we can attain any specified lower bound P* for the probability P(SCB) that the SS contains the best cell by increasing r and need not increase the value of c. Of principal interest is the case c = 0; the reason is that (i) for c = 0 the procedure is somewhat more efficient, as will be apparent later, especially after viewing the tabled results, and (ii) for c = 0 our procedure never selects a subset containing all the cells. Using the SC we determine the smallest value of r that satisfies a preassigned lower bound P* for P(SCB). Two different criteria of a correct selection are considered, both related to (but distinct from) the probability P(SCB) that the SS contains the best cell. The results of this new procedure are numerically compared with those in the references cited above, using randomization to make the comparisons fair and reasonable. If the other procedure is a fixed sample size procedure using N observations, then we wish to randomize between some r - 1 and the next integer r so that the resulting E(N) for the proposed procedure will be (exactly) equal to the N-value for the other procedure. The proposed SAML (selecting among the multinomial losers) procedure turns out to be more efficient and, for at least one of the criteria, uniformly more efficient for all values of the specified parameters. Later we make the conjecture, based on numerical evidence, that under the model used in [1] for selecting the cell with the smallest cell probability, the SC is least favorable (LFC).
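The inverse sampling rule described above is easy to simulate: every observation credits a "zero" to each cell it misses, so rarely hit (small-probability) cells accumulate zeros fastest. The sketch below is a minimal, illustrative rendering of that rule, not the authors' tabled procedure.

```python
import random

def saml_trial(probs, r, c=0, rng=random):
    """One run of the inverse-sampling rule sketched in the abstract: each
    observation adds a zero to every cell it does NOT fall in; sampling stops
    when some cell has accumulated r zeros; the selected subset is all cells
    with at least r - c zeros. Returns the selected cell indices."""
    k = len(probs)
    zeros = [0] * k
    while max(zeros) < r:
        u = rng.random()
        cum, hit = 0.0, k - 1
        for i, p in enumerate(probs):
            cum += p
            if u < cum:
                hit = i
                break
        for i in range(k):
            if i != hit:
                zeros[i] += 1
    return [i for i in range(k) if zeros[i] >= r - c]
```

With a configuration such as `probs = [0.05, 0.45, 0.5]`, cell 0 (the "loser") is almost never hit, so it reaches r zeros well before the others and is selected with high probability, which is the intuition behind the procedure's efficiency.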

16.
Abstract

Suppose observations are taken sequentially from a multinomial distribution with k cells until the count in one of the cells reaches a predetermined number M. With a view to selecting the cell with the largest probability, we select the cell whose count reaches M. The problem of testing whether the selected multinomial cell is the best is considered in this paper. We propose the test procedure and show that the supremum of the probability of error for our procedure can be written as a single integral involving the gamma distribution. Exact values of the supremum of the probability of error are tabulated and its approximation formula for large values of M is provided.
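The sampling rule in this abstract (stop when some cell's count first reaches M, and select that cell) can be simulated directly. This is a sketch of the sampling rule only; the test procedure and its error supremum are not reproduced here.

```python
import random

def select_until_M(probs, M, rng=random):
    """Draw multinomial observations one at a time until some cell's count
    reaches M; return the index of that cell (the selected cell)."""
    counts = [0] * len(probs)
    while True:
        u = rng.random()
        cum = 0.0
        for i, p in enumerate(probs):
            cum += p
            if u < cum:
                counts[i] += 1
                if counts[i] == M:
                    return i
                break
```

Under a configuration like `[0.1, 0.1, 0.8]`, the dominant cell wins the race to M in nearly every run, which is why the interesting question becomes bounding the error probability in less favorable configurations.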

17.
Consider a fixed sample size uniformly most powerful test of H0: p ≤ p0 versus H1: p > p0, where p is a Bernoulli parameter. If the test is nonrandomized, an optimally curtailed version stops sampling as soon as the final decision is certain. If the test is randomized, the optimal curtailment procedure is more complex. This paper gives a simple, complete characterization of weakly admissible curtailments of such randomized tests.
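For the nonrandomized case the curtailment idea is simple enough to sketch: once the running success or failure count makes the final decision inevitable, stop. The cutoff form `reject iff total successes > cutoff` is the standard shape of such a test; the function name is illustrative.

```python
def curtailed_binomial_test(outcomes, n, cutoff):
    """Optimally curtailed version of a nonrandomized binomial test (sketch):
    the fixed-sample test rejects H0 iff the number of successes in n trials
    exceeds `cutoff`. Stop early: reject once successes > cutoff (no further
    outcome can change that), accept once failures >= n - cutoff (successes
    can no longer exceed cutoff). Returns (decision, trials actually used)."""
    successes = failures = 0
    for t, x in enumerate(outcomes[:n], start=1):
        if x:
            successes += 1
        else:
            failures += 1
        if successes > cutoff:
            return "reject", t
        if failures >= n - cutoff:
            return "accept", t
    raise ValueError("fewer than n outcomes supplied")
```

For randomized tests the decision on the boundary depends on an auxiliary randomization, so "the decision is certain" is no longer a simple counting condition, which is the complication the paper characterizes.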

18.
Abstract

In early phase cancer clinical trials where toxicity events follow independent and identical Bernoulli distributions indexed by patients, the Bayesian stopping rule has been used for continuous monitoring of toxicity along with an affordable maximum sample size (N). This article studies some properties of a heuristic procedure in which the trial stops at the first time that the posterior probability that the toxicity rate (p) exceeds a threshold (η) is greater than a probability threshold (τ). Specifically, we study the pattern formed by stopping times and regions, recursive stopping probability computation, and toxicity rate estimation. Some relevant theoretical results are given. The presented results are potentially useful for guiding toxicity clinical trial designs.
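The stopping criterion described here reduces to a Beta posterior computation: with a Beta(a, b) prior and x toxicities in n patients, the posterior is Beta(a + x, b + n - x), and the trial stops when P(p > η | data) > τ. The sketch below estimates that posterior probability by Monte Carlo; the uniform Beta(1, 1) prior and the thresholds are illustrative assumptions, not values from the article.

```python
import random

def posterior_prob_exceeds(x, n, eta, a=1.0, b=1.0, draws=20000, rng=random):
    """Estimate P(p > eta | x toxicities in n patients) under a Beta(a, b)
    prior by sampling from the Beta(a + x, b + n - x) posterior."""
    return sum(rng.betavariate(a + x, b + n - x) > eta for _ in range(draws)) / draws

def stop_for_toxicity(x, n, eta=0.3, tau=0.9):
    """Bayesian continuous-monitoring rule (sketch): stop the trial if the
    posterior probability that the toxicity rate exceeds eta is above tau."""
    return posterior_prob_exceeds(x, n, eta) > tau
```

For example, 8 toxicities in 10 patients triggers the stop (the posterior mass above η = 0.3 is near 1), while 1 in 10 does not; the article's "stopping regions" are precisely the (x, n) pairs for which this rule fires.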

19.
Abstract

We investigate in this paper an optimal stopping problem where two decision makers are involved in the selection of a single offer. Suppose that n offers are examined one at a time by both decision makers. At each stage, a decision must be taken: accept the current offer and stop the selection process, or discard it and examine the next offer. We assume that no recall of previously examined offers is allowed. A conflict arises when one decision maker decides to accept a currently inspected offer and the second decides to discard it. In such conflicting situations, a decision should be taken by defining a stopping rule for the group. We propose to stop the process if either decision maker accepts an offer. We develop the dynamic programming approach for this problem and state the optimal strategy for a fixed utility. Then, we propose an experimental investigation of the problem.
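The dynamic programming step can be sketched for the single-decision-maker special case, assuming for illustration that offer values are i.i.d. Uniform(0, 1); the two-decision-maker group rule and the utilities of the article are not reproduced here. Backward induction gives the continuation values: accept offer i exactly when its value exceeds the value of continuing.

```python
def stopping_thresholds(n):
    """Backward induction for the single-offer stopping problem (sketch),
    offers i.i.d. Uniform(0, 1), no recall. v[i] is the expected value of an
    optimal policy with offers i..n remaining; accept offer i iff its value
    exceeds the continuation value v[i + 1]."""
    v = [0.0] * (n + 1)
    v[n] = 0.5  # the last offer must be taken; E[Uniform(0, 1)] = 1/2
    for i in range(n - 1, 0, -1):
        c = v[i + 1]
        # E[max(X, c)] for X ~ Uniform(0, 1) equals c**2 + (1 - c**2) / 2
        v[i] = c * c + (1.0 - c * c) / 2.0
    return v
```

For n = 3 this yields continuation values 0.5, 0.625, and about 0.695, i.e., the acceptance threshold rises the more offers remain; the group rule studied in the paper (stop if either decision maker accepts) modifies the acceptance event in each stage of exactly this recursion.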

20.
Abstract

In this article, we consider a test of the sphericity for high-dimensional covariance matrices. We produce a test statistic by using the extended cross-data-matrix (ECDM) methodology. We show that the ECDM test statistic is based on an unbiased estimator of a sphericity measure. In addition, the ECDM test statistic enjoys consistency properties and the asymptotic normality in high-dimensional settings. We propose a new test procedure based on the ECDM test statistic and evaluate its asymptotic size and power theoretically and numerically. We give a two-stage sampling scheme so that the test procedure can ensure a prespecified level both for the size and power. We apply the test procedure to detect divergently spiked noise in high-dimensional statistical analysis. We analyze gene expression data by the proposed test procedure.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号