Similar literature
Found 20 similar documents (search time: 31 ms)
1.
Dunnett and Tamhane [Dunnett, C.W., Tamhane, A.C., 1992. A step-up multiple test procedure. J. Amer. Statist. Assoc. 87, 162-170] proposed a step-up procedure for comparing k treatments with a control and showed that it is more powerful than the corresponding single-step and step-down procedures. Since then, several modified step-up procedures have been suggested for different testing environments. Establishing such step-up procedures requires methods for evaluating the joint distribution of the order statistics. In some cases, experimenters may have difficulty applying these procedures in multiple hypothesis testing because existing algorithms are computationally limited when evaluating the critical values for a large number of comparisons. As a result, most procedures are only workable when the design of the experiment is balanced with k ≤ 20 or unbalanced with k ≤ 8. In this paper, new algorithms are proposed to compute the joint distribution of order statistics efficiently in various situations. An extensive numerical study shows that the proposed algorithms easily handle testing situations with a much larger k. Examples of applying the proposed algorithms to evaluate the critical values of two existing step-up procedures are also presented.

2.
In this paper, we deal with the problem of computing the digital fundamental group of a closed k-surface by using various properties of both a (simple) closed k-surface and a digital covering map. To be specific, consider a simple closed ki-curve with li elements in Zni, i ∈ {1,2}. The Cartesian product of the two curves is not always a closed k-surface with some k-adjacency of Zn1+n2. Thus, we provide a condition for this product to be a (simple) closed k-surface with some k-adjacency depending on the ki-adjacency, i ∈ {1,2}. Moreover, even if the product is not a closed k-surface, we show that its k-fundamental group can be calculated by both a k-homotopic thinning and a strong k-deformation retract.

3.
In multiple hypothesis testing, it is important to control the probability of rejecting "true" null hypotheses. A standard approach has been to control the family-wise error rate (FWER), the probability of rejecting at least one true null hypothesis. For large numbers of hypotheses, controlling the FWER can result in very low power for testing single hypotheses. More powerful multiple-step FDR procedures have therefore been proposed which control the "false discovery rate" (the expected proportion of Type I errors). More recently, van der Laan et al. [Augmentation procedures for control of the generalized family-wise error rate and tail probabilities for the proportion of false positives. Statist. Appl. in Genetics and Molecular Biol. 3, 1-25] proposed controlling a generalized family-wise error rate, the k-FWER (also called gFWER(k)), defined as the probability of at least (k+1) Type I errors (k = 0 gives the usual FWER). Lehmann and Romano [Generalizations of the familywise error rate. Ann. Statist. 33(3), 1138-1154] suggested both a single-step and a step-down procedure for controlling the generalized FWER, making no assumptions about the p-values of the individual tests. Their step-down procedure is simple to apply and cannot be improved without violating control of the k-FWER. In this paper, by limiting the number of steps in step-down or step-up procedures, new procedures are developed to control the k-FWER and the proportion of false positives (PFP). Using data from the literature, the new procedures are compared with those of Lehmann and Romano and, under the assumption of a multivariate normal distribution of the test statistics, show considerable improvement in reducing the number of false positives and the PFP.
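The step-down constants of the Lehmann–Romano k-FWER procedure are explicit, so the procedure is straightforward to sketch. Below is a hedged Python sketch using the constants commonly attributed to Lehmann and Romano (2005); the function name and the p-values in the usage note are illustrative, not from the paper.

```python
def lr_stepdown_kfwer(pvals, alpha, k):
    """Step-down procedure controlling the k-FWER, the probability of more
    than k false rejections (k = 0 reduces to the classical Holm procedure).

    Critical constants (as commonly stated for Lehmann & Romano, 2005):
        alpha_i = (k+1)*alpha / m                for i <= k+1,
        alpha_i = (k+1)*alpha / (m + k + 1 - i)  for i >  k+1,
    compared against the ordered p-values p_(1) <= ... <= p_(m).
    Returns the set of indices of rejected hypotheses.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = set()
    for step, idx in enumerate(order, start=1):
        a = (k + 1) * alpha / (m if step <= k + 1 else m + k + 1 - step)
        if pvals[idx] <= a:
            rejected.add(idx)
        else:
            break  # step-down: stop at the first non-rejection
    return rejected
```

For example, with p-values (0.001, 0.002, 0.9, 0.8), alpha = 0.05 and k = 0, the first two hypotheses are rejected, matching the Holm procedure.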

4.
It is proved that the family of recognizable N-subsets is not closed under the operation sup, and that there exists even a DOL length sequence x0, x1, … such that, for any k, xi ≥ xi+1 ≥ … ≥ xi+k holds for some i, while the set {n ∈ N | xn > xn+1} is infinite.

5.
In this paper, some new lattices of fuzzy substructures are constructed. For a given fuzzy set μ in a group G, a fuzzy subgroup S(μ) generated by μ is defined, which helps to establish that the set Ls of all fuzzy subgroups with the sup property constitutes a lattice. Consequently, several other sublattices of the lattice L of all fuzzy subgroups of G are also obtained. The notion of infimum is used to construct a fuzzy subgroup i(μ) generated by a given fuzzy set μ, in contrast to the usual practice of using the supremum. In the process a new fuzzy subgroup i(μ) is defined, which we call a shadow fuzzy subgroup of μ. It is established that if μ has the inf property, then i(μ) also has this property.

6.
Comparison of k treatment means under the simple-order assumption (μ1 ≤ μ2 ≤ ⋯ ≤ μk) is considered. Under an order restriction, isotonic estimates and global tests of homogeneity have been known for several decades. Recently, some multiple-comparison techniques have been proposed, but none has become standard. In this article, we develop multiple-comparison and clustering techniques for simply ordered means using a Bayesian hierarchical model. We parameterize each individual mean in terms of the difference between the preceding mean and itself. Estimates of these difference parameters are obtained by MCMC. Pairwise comparisons and determination of the most probable ordered clustering are based on the posterior probabilities that the difference parameters are zero. Numerical examples are given.

7.
Let S be a set of n horizontal and vertical segments in the plane, and let s, t ∈ S. A Manhattan path (of length k) from s to t is an alternating sequence of horizontal and vertical segments, s = r0, r1, …, rk = t, such that ri intersects ri+1, 0 ≤ i < k. An O(n log n) time, O(n log n) space algorithm is presented which, given S and t, finds a tree of shortest Manhattan paths from all s ∈ S to t. The algorithm relies on a new data structure which makes it possible to find in O(log n + p) time all p segments currently in S which intersect a given s ∈ S, and which supports deletion of any segment from S in O(log n) time, where the cost of these operations is amortized over the whole algorithm. The structure makes use of the linear-time version of the union-find algorithm on consecutive sets recently discovered by Gabow and Tarjan. We prove an Ω(n log n) lower bound on the complexity of deciding whether there is a Manhattan path between two given segments, under the linear decision tree model. Finally, some applications of the Manhattan path algorithm are indicated.
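The underlying idea, stripped of the efficient data structure, is a breadth-first search over the intersection graph of the segments. The following naive quadratic-time Python sketch illustrates this; it is not the paper's O(n log n) algorithm, and the segment encoding is an assumption made for illustration.

```python
from collections import deque

def segments_intersect(a, b):
    """a, b: ((x1, y1), (x2, y2)), each axis-aligned. An alternating path
    only ever tests one horizontal against one vertical segment."""
    (ax1, ay1), (ax2, ay2) = a
    (bx1, by1), (bx2, by2) = b
    if ay1 == ay2 and bx1 == bx2:  # a horizontal, b vertical
        return (min(ax1, ax2) <= bx1 <= max(ax1, ax2)
                and min(by1, by2) <= ay1 <= max(by1, by2))
    if ax1 == ax2 and by1 == by2:  # a vertical, b horizontal
        return segments_intersect(b, a)
    return False  # parallel segments cannot alternate

def shortest_manhattan_path(segments, s, t):
    """BFS over the intersection graph: minimum number of
    segment-to-segment hops from index s to index t, or -1."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in range(len(segments)):
            if v not in dist and segments_intersect(segments[u], segments[v]):
                dist[v] = dist[u] + 1
                q.append(v)
    return -1
```

Running BFS once from t (rather than from each s) yields the tree of shortest Manhattan paths from all segments to t, which is what the paper's algorithm computes much faster.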

8.
Suppose independent observations Xi, i = 1, …, n are observed from a mixture model with mixing distribution Q(λ), where λ is a scalar and Q(λ) is a nondegenerate distribution of unspecified form. We consider estimating Q(λ) by the nonparametric maximum likelihood (NPML) method under two scenarios: (1) the likelihood is penalized by a functional g(Q); and (2) Q is under a constraint g(Q) = g0. We propose a simple and reliable algorithm, termed VDM/ECM, for estimating Q when the likelihood is penalized by a linear functional. We show this algorithm can be applied, via a linearization procedure, to the more general situation where the penalty is not linear but a function of linear functionals. The constrained NPMLE can be found by penalizing the quadratic distance |g(Q) - g0|² under a large penalty factor γ > 0 using this algorithm. The algorithm is illustrated with two real data sets.

9.
A problem in the earth sciences is to reduce a sequence of observation vectors X1, X2, …, XN to a set of internally homogeneous segments or zones. This paper uses the model that the observation vectors in the ith zone are a random sample from the multivariate normal N(ξi, Σi) distribution. It is demonstrated that the maximum likelihood estimates of the boundaries between zones may be determined by dynamic programming, and a FORTRAN algorithm to perform this estimation is given.
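The dynamic-programming idea can be sketched in a simplified univariate setting, replacing the multivariate normal likelihood with within-zone sums of squared deviations (minimizing these is equivalent to maximizing a normal likelihood with common variance). This Python sketch is an assumption-laden illustration, not the paper's FORTRAN algorithm.

```python
def best_segmentation(xs, n_zones):
    """Optimal partition of xs into n_zones contiguous zones, minimizing the
    total within-zone sum of squared deviations (a univariate stand-in for
    the negative normal log-likelihood). Returns (cost, interior boundaries).
    """
    n = len(xs)
    INF = float("inf")

    # Prefix sums give O(1) segment costs.
    ps, ps2 = [0.0], [0.0]
    for x in xs:
        ps.append(ps[-1] + x)
        ps2.append(ps2[-1] + x * x)

    def sse(i, j):  # sum of squared deviations of xs[i:j], j > i
        s, s2, m = ps[j] - ps[i], ps2[j] - ps2[i], j - i
        return s2 - s * s / m

    # dp[z][j]: best cost of splitting xs[:j] into z zones; cut[z][j]: last split.
    dp = [[INF] * (n + 1) for _ in range(n_zones + 1)]
    cut = [[0] * (n + 1) for _ in range(n_zones + 1)]
    dp[0][0] = 0.0
    for z in range(1, n_zones + 1):
        for j in range(z, n + 1):
            for i in range(z - 1, j):
                c = dp[z - 1][i] + sse(i, j)
                if c < dp[z][j]:
                    dp[z][j], cut[z][j] = c, i

    # Recover the zone boundaries by walking the cut table backwards.
    bounds, j = [], n
    for z in range(n_zones, 0, -1):
        bounds.append(cut[z][j])
        j = cut[z][j]
    return dp[n_zones][n], sorted(bounds)[1:]  # drop the leading 0
```

For instance, splitting [0, 0, 0, 10, 10, 10] into two zones recovers the boundary at index 3 with zero cost.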

10.
11.
Unlike the connected sum in classical topology, its digital version is shown to have some intrinsic features. In this paper, we study both the digital fundamental group and the Euler characteristic of a connected sum of digital closed ki-surfaces, i ∈ {0,1}.

12.
In computer aided verification, the reachability problem is particularly relevant for safety analyses. Given a regular tree language L, a term t and a relation R, the reachability problem consists in deciding whether there exist a positive integer n and terms t0, t1, …, tn such that t0 ∈ L, tn = t and (ti, ti+1) ∈ R for every 0 ≤ i < n. In this case, the term t is said to be reachable; otherwise it is said to be unreachable. This problem is decidable for particular kinds of relations, but it is undecidable in general, even if L is finite. Several approaches to the unreachability problem are based on computing an R-closed regular language containing L. In this paper we show a theoretical limit of this kind of approach.

13.
Computers & Structures, 2002, 80(7-8): 643-658
This paper is concerned with the application of a coarse preconditioner, the generalised minimal residual (GMRES) method and a generalised successive over-relaxation (GSOR) method to linear systems of equations derived from boundary integral equations. Attention is restricted to systems of the form ∑j=1..N Hij xj = ci, i = 1, 2, …, N, where the Hij are matrices and the xj and ci are column vectors. The integer N denotes the number of domains, and these systems are solved by adapting techniques initially devised for single-domain problems. These techniques include parameter-matrix-accelerated GMRES and GSOR in combination with a multiplicative Schwarz method for non-overlapping domains. The multiplicative Schwarz method is a generalised form of the block Gauss–Seidel method and is called the generalised multi-domain iterative procedure. A new form of coarse-grid preconditioning is applied to limit the dependence of convergence on the number of blocks. The coarse preconditioner is obtained from a crude representation of the global system of equations. Attention is restricted to thermal problems with domains connected through resistive thermal barriers, and the effect of lowering and raising the thermal resistance between domains is investigated. The coarse preconditioner requires a more accurate representation on interfaces with lower thermal resistance. Computation times are determined for the iterative procedures and for elimination techniques, indicating their relative benefits for problems of this nature.

14.
Sorting is a classic problem and one to which many others reduce easily. In the streaming model, however, we are allowed only one pass over the input and sublinear memory, so in general we cannot sort. In this paper we show that, to determine the sorted order of a multiset s of size n containing σ ≥ 2 distinct elements using one pass and o(n log σ) bits of memory, it is generally necessary and sufficient that its entropy H = o(log σ). Specifically, if s = {s1, …, sn} and si1, …, sin is the stable sort of s, then we can compute i1, …, in in one pass using O((H+1)n) time and O(Hn) bits of memory, with a simple combination of classic techniques. On the other hand, in the worst case it takes that much memory to compute any sorted ordering of s in one pass.
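The two quantities in these bounds are easy to compute directly. The Python sketch below evaluates the zeroth-order empirical entropy H and the stable-sort permutation i1, …, in (offline and 0-indexed, for illustration only; it is not the one-pass streaming algorithm).

```python
import math
from collections import Counter

def empirical_entropy(s):
    """Zeroth-order empirical entropy H(s) = sum_a (n_a/n) * log2(n/n_a),
    where n_a is the multiplicity of symbol a; this is the H in the
    O(Hn)-bit space bound quoted above."""
    n = len(s)
    return sum((c / n) * math.log2(n / c) for c in Counter(s).values())

def stable_sort_permutation(s):
    """0-indexed permutation i1, ..., in with s[i1] <= ... <= s[in],
    ties kept in input order (the 'stable sort of s')."""
    return sorted(range(len(s)), key=lambda i: (s[i], i))
```

For a constant multiset the entropy is 0, matching the intuition that its sorted order needs almost no memory to describe.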

15.
We propose a discrete approximation of Gaussian curvature over quadrilateral meshes using a linear combination of two angle deficits. Let gij and bij be the coefficients of the first and second fundamental forms of a smooth parametric surface F, and suppose F is sampled so that a surface mesh is obtained. Theoretically, we show that for vertices of valence four, the two angle deficits considered are asymptotically equivalent to rational functions in gij and bij under special conditions called the parallelogram criterion; in particular, the numerators of the rational functions are homogeneous polynomials of degree two in bij with closed-form coefficients. Our discrete approximation of the Gaussian curvature derived from the combination of the angle deficits has a quadratic convergence rate under the parallelogram criterion. Numerical results which justify the theoretical analysis are also presented.
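For orientation, the classic single angle deficit at a vertex is 2π minus the sum of the angles between consecutive incident edges. The paper combines two such deficits; the Python sketch below computes only the basic deficit at a valence-4 vertex, with an illustrative vertex encoding not taken from the paper.

```python
import math

def angle(u, v):
    """Angle between 3D vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def angle_deficit(p, ring):
    """2*pi minus the sum of angles between consecutive edges from vertex p
    to its cyclically ordered one-ring neighbours: zero on a flat grid,
    positive at a convex corner."""
    edges = [[q[i] - p[i] for i in range(3)] for q in ring]
    k = len(edges)
    return 2 * math.pi - sum(angle(edges[i], edges[(i + 1) % k])
                             for i in range(k))
```

On a flat quad grid the four angles are each π/2, so the deficit vanishes, consistent with zero Gaussian curvature.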

16.
This paper proposes a two-stage feedforward neural network (FFNN) based approach for modeling the fundamental frequency (F0) values of a sequence of syllables. In this study, (i) linguistic constraints represented by positional, contextual and phonological features, (ii) production constraints represented by articulatory features and (iii) linguistically relevant tilt parameters are proposed for predicting intonation patterns. In the first stage, tilt parameters are predicted using the linguistic and production constraints. In the second stage, F0 values of the syllables are predicted using the tilt parameters from the first stage together with the basic linguistic and production constraints. The prediction performance of the neural network models is evaluated using objective measures such as average prediction error (μ), standard deviation (σ) and linear correlation coefficient (γX,Y). The prediction accuracy of the proposed two-stage FFNN model is compared with that of other statistical models such as Classification and Regression Tree (CART) and Linear Regression (LR) models. The prediction accuracy of the intonation models is also analyzed through listening tests that evaluate the quality of synthesized speech after incorporating the intonation models into the baseline system. From the evaluation, it is observed that prediction accuracy is better for the two-stage FFNN models than for the other models.

17.
In many real-life applications, physical considerations lead to the necessity of considering the smoothest of all signals consistent with the measurement results. Usually, the corresponding optimization problem is solved in a statistical context. In this paper, we propose a quadratic-time algorithm for smoothing an interval function. This algorithm, given n+1 intervals x0, …, xn with 0 ∈ x0 and 0 ∈ xn, returns the vector x0, …, xn for which x0 = xn = 0, xi ∈ xi, and Σ(xi+1 - xi)² → min.
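This optimization problem is convex with box constraints, so even a naive projected coordinate descent converges to the minimizer. The Python sketch below illustrates the problem being solved; it is a slow iterative stand-in, not the paper's quadratic-time algorithm, and the function name and iteration count are assumptions.

```python
def smooth_intervals(lo, hi, iters=2000):
    """Smoothest sequence through the intervals [lo[i], hi[i]]: minimizes
    sum_i (x[i+1] - x[i])**2 subject to lo[i] <= x[i] <= hi[i] and the
    boundary conditions x[0] = x[n] = 0 (0 is assumed to lie in the first
    and last interval, as in the problem statement)."""
    n = len(lo) - 1
    x = [0.0] * (n + 1)
    for _ in range(iters):
        for i in range(1, n):
            # Unconstrained minimizer of the two quadratic terms touching
            # x[i] is the midpoint of its neighbours; clip into the interval.
            x[i] = min(hi[i], max(lo[i], (x[i - 1] + x[i + 1]) / 2))
    return x
```

With intervals [0,0], [1,2], [0,0] the middle value is pulled toward 0 but clipped to its lower bound 1, giving the smoothest feasible signal (0, 1, 0).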

18.
This paper deals with the distribution of the random variable V, the smaller of two correlated F-variables. The distribution of V arises in many statistical problems, including analysis of variance, selection and ordering of populations, and some two-stage estimation procedures. In this paper we give the probability density function and the moments of V in simple forms. We also present an algorithm to compute the lower and upper percentage points of V. The method is exact and involves no interpolation.

19.
20.
Liu (2000) considered maximum likelihood estimation and Bayesian estimation in a binomial model with simplex constraints using the expectation-maximization (EM) and data augmentation (DA) algorithms. By introducing latent variables {Zij} and {Yij} (defined later), he formulated the constrained-parameter problem as a missing-data problem. However, the derived DA algorithm does not work because he actually assumed that the {Yij} are known. Furthermore, although the final results from the derived EM algorithm are correct, his findings rest on the assumption that the {Yij} are observable. This note provides a correct DA algorithm. In addition, we obtain the same E-step and M-step under the assumption that the {Yij} are unobservable. A real example is used for illustration.
