Similar Documents
Found 20 similar documents (search time: 250 ms)
1.
Consider (m + 1)-dimensional, m ≥ 1, diffeomorphisms that have a saddle fixed point O with m-dimensional stable manifold W^s(O), one-dimensional unstable manifold W^u(O), and saddle value σ different from 1. We assume that W^s(O) and W^u(O) are tangent at the points of some homoclinic orbit, and we let the order of tangency be arbitrary. In the case σ < 1, we prove necessary and sufficient conditions for the existence of topological horseshoes near homoclinic tangencies. In the case σ > 1, we also obtain a criterion for the existence of horseshoes, under the additional assumption that the homoclinic tangency is simple.

2.
The minimum covariance determinant (MCD) method of Rousseeuw is a highly robust estimator of multivariate location and scatter. Its objective is to find h observations (out of n) whose covariance matrix has the lowest determinant. Until now, applications of the MCD were hampered by the computation time of existing algorithms, which were limited to a few hundred objects in a few dimensions. We discuss two important applications of larger size: one about a production process at Philips with n = 677 objects and p = 9 variables, and a dataset from astronomy with n = 137,256 objects and p = 27 variables. To deal with such problems we have developed a new algorithm for the MCD, called FAST-MCD. The basic ideas are an inequality involving order statistics and determinants, and techniques which we call “selective iteration” and “nested extensions.” For small datasets, FAST-MCD typically finds the exact MCD, whereas for larger datasets it gives more accurate results than existing algorithms and is faster by orders of magnitude. Moreover, FAST-MCD is able to detect an exact fit—that is, a hyperplane containing h or more observations. The new algorithm makes the MCD method available as a routine tool for analyzing multivariate data. We also propose the distance-distance plot (D-D plot), which displays MCD-based robust distances versus Mahalanobis distances, and illustrate it with some examples.
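
A minimal sketch of the D-D plot idea in Python. It uses scikit-learn's MinCovDet, which implements the FAST-MCD algorithm (with a reweighting step); the synthetic data, outlier fraction, and chi-square cutoff are illustrative assumptions, not details from the paper.

```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(0)
p = 3
X = rng.normal(size=(200, p))
X[:20] += 5.0                                    # plant a cluster of outliers

robust = MinCovDet(random_state=0).fit(X)        # FAST-MCD location and scatter
classical = EmpiricalCovariance().fit(X)

# .mahalanobis() returns squared distances; take square roots for the plot.
rd = np.sqrt(robust.mahalanobis(X))              # MCD-based robust distances
md = np.sqrt(classical.mahalanobis(X))           # classical Mahalanobis distances
cutoff = np.sqrt(chi2.ppf(0.975, df=p))          # usual chi-square cutoff

plt.scatter(md, rd, s=10)
plt.axhline(cutoff, linestyle="--")
plt.axvline(cutoff, linestyle="--")
plt.xlabel("Mahalanobis distance")
plt.ylabel("Robust (MCD) distance")
plt.title("Distance-distance (D-D) plot")
plt.show()
```

Points far above the horizontal cutoff but near the diagonal are outliers that classical Mahalanobis distances fail to flag, which is exactly what the D-D plot is designed to expose.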

3.
Methods are presented for the determination of the alias matrix of two-factor interactions for the orthogonal main-effect 2^n 3^m plans catalogued by Addelman and Kempthorne. This catalogue includes Plackett-Burman designs and designs obtained by replacement in 2^(n−p) plans or collapsing in 3^(m−p) plans. Systematic methods are included to facilitate the computations. For standard r^(n−p) factorial designs, techniques are given to determine a set of live factors, a generating set of linear sum congruences, and the alias matrix. Additional orthogonal main-effect 2^n 3^m designs are constructed to supplement the Addelman-Kempthorne catalogue of designs.

4.
This study addresses the identical parallel machine scheduling problem with the objective of minimizing makespan subject to minimum total absolute deviation of job completion time (TADC). An optimization algorithm is first proposed to solve TADC on identical parallel machines, and an iterative procedure based on a polynomial binary integer programming model is then proposed to minimize makespan. Computational experiments show that the proposed algorithm is efficient. The worst-case performance, which refers to the largest average execution time over the scenarios of the experiments, is 229.10 seconds for the problem with n = 200, m = 30, and p_j drawn from a uniform [1, 100] distribution.
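
A toy brute-force illustration of the bicriteria objective, assuming the common definition TADC = sum of |C_i − C_j| over all job pairs: enumerate every schedule of a tiny instance, minimize TADC first, then makespan among TADC-optimal schedules. This only makes the objective concrete; the paper's algorithm is far more efficient.

```python
import itertools

def completions(seq, p):
    """Completion times of jobs processed consecutively in the order seq."""
    t, out = 0, []
    for j in seq:
        t += p[j]
        out.append(t)
    return out

def tadc(C):
    """Total absolute deviation of completion times over all job pairs."""
    return sum(abs(a - b) for a, b in itertools.combinations(C, 2))

p = [3, 1, 4, 1, 5, 2]                  # processing times for n = 6 jobs
n, m = len(p), 2                        # two identical machines
best = None
for assign in itertools.product(range(m), repeat=n):
    machines = [[j for j in range(n) if assign[j] == k] for k in range(m)]
    for seqs in itertools.product(*(itertools.permutations(jobs) for jobs in machines)):
        C = [c for seq in seqs for c in completions(seq, p)]
        key = (tadc(C), max(C))         # lexicographic: TADC first, then makespan
        best = key if best is None else min(best, key)
print(f"minimum TADC = {best[0]}, best makespan at that TADC = {best[1]}")
```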

5.
Fries and Hunter (1980) presented a practical algorithm for selecting standard 2^(n−m) fractional factorial designs based on a criterion they called “minimum aberration.” In this article some simple results are presented that enable the Fries–Hunter algorithm to be used for a wider range of n and m, and for designs with factors at p levels where p ≥ 2 is prime. Examples of minimum aberration 2^(n−m) designs with resolution R ≥ 4 are given for m, n − m < 9. A matrix is given for generating 3^(n−m) designs with m, n − m ≤ 6, which have, or nearly have, minimum aberration.
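
The minimum-aberration criterion compares designs through their word-length patterns (A3, A4, ...), minimized sequentially. The sketch below is not the Fries-Hunter algorithm itself, just the bookkeeping it relies on: expand the defining contrast subgroup of a 2^(n−m) design from its m generator words and count word lengths. The 2^(7−2) generators are chosen only for illustration.

```python
from itertools import combinations

def word_length_pattern(generators, n):
    """generators: defining words of a 2^(n-m) design, as frozensets of factor indices."""
    words = set()
    for r in range(1, len(generators) + 1):
        for combo in combinations(generators, r):
            w = frozenset()
            for g in combo:                      # product of words = symmetric difference
                w = w ^ g
            words.add(w)
    pattern = [0] * (n + 1)
    for w in words:
        pattern[len(w)] += 1
    return pattern[3:]                           # (A3, A4, ..., An)

# Illustrative 2^(7-2) design with defining words I = 12346 and I = 12457:
g1 = frozenset({1, 2, 3, 4, 6})
g2 = frozenset({1, 2, 4, 5, 7})
print(word_length_pattern([g1, g2], n=7))        # -> [0, 1, 2, 0, 0]
```

Comparing two candidate designs then amounts to comparing these tuples lexicographically: the one whose first differing entry is smaller has less aberration.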

6.
W. González-Manteiga, R. Cao, TEST, 1993, 2(1-2): 161-188
Summary: Given the model Y_i = m(X_i) + ε_i, where E(ε_i) = 0, X_i ∈ C, i = 1, …, n, and C is a p-dimensional compact set, we have designed a new method for testing the hypothesis that the regression function follows a general linear model, m(·) ∈ {m_θ(·) = A^t(·)θ}_{θ∈Θ⊂ℝ^q}, with A a function from ℝ^p to ℝ^q. The statistic used for testing the given hypothesis, denoted ΔASE, is defined as the difference between the average squared errors (ASE) associated with the nonparametric estimator of m and the minimum-distance parametric estimator of m. The asymptotic normality of both ΔASE and the minimum-distance estimators is proved under general conditions. Alternative bootstrap versions of ΔASE are also considered.
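
A loose numerical sketch of the ΔASE idea under simplifying assumptions: the nonparametric fit is a Nadaraya-Watson kernel estimator, the parametric fit is ordinary least squares, and the errors are residual-based rather than the paper's exact ASE and minimum-distance constructions. Bandwidth, kernel, and data are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.2, size=n)     # null model m(x) = a + b*x is true

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson estimate of m at points x0, Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

m_nonpar = nw_estimate(x, x, y, h=0.05)               # nonparametric fit
theta = np.polyfit(x, y, 1)                           # parametric least-squares fit
m_par = np.polyval(theta, x)

ase_nonpar = np.mean((y - m_nonpar) ** 2)
ase_par = np.mean((y - m_par) ** 2)
delta_ase = ase_par - ase_nonpar                      # small under H0, large otherwise
print(f"Delta-ASE = {delta_ase:.4f}")
```

Replacing the linear mean above with, say, a sine function makes delta_ase grow sharply, which is the behavior the test statistic exploits.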

7.
Rita Malhotra, H. L. Bhatia, TEST, 1984, 35(3): 331-338
The present paper develops techniques to solve two variants of the time minimizing assignment problem. In the first, there are n jobs to be assigned to m establishments (m < n) in such a way that the time taken to complete all the jobs is minimized, it being assumed that all the jobs are commenced simultaneously. The second variant extends the first by introducing an additional constraint on the minimum number of jobs to be taken up by each establishment. Numerical examples are included to illustrate the techniques.

8.
Reconstructing a free-form surface from 3-dimensional (3D) noisy measurements is a central problem in inspection, statistical quality control, and reverse engineering. We present a new method for the statistical reconstruction of a free-form surface patch based on 3D point cloud data. The surface is represented parametrically, with each of the three Cartesian coordinates (x, y, z) a function of surface coordinates (u, v), a model form compatible with computer-aided-design (CAD) models. This model form also avoids having to choose one Euclidean coordinate (say, z) as a “response” function of the other two coordinate “locations” (say, x and y), as commonly used in previous Euclidean kriging models of manufacturing data. The (u, v) surface coordinates are computed using parameterization algorithms from the manifold learning and computer graphics literature. These are then used as locations in a spatial Gaussian process model that considers correlations between two points on the surface a function of their geodesic distance on the surface, rather than a function of their Euclidean distances over the xy plane. We show how the proposed geodesic Gaussian process (GGP) approach better reconstructs the true surface, filtering the measurement noise, than when using a standard Euclidean kriging model of the “heights”, that is, z(x, y). The methodology is applied to simulated surface data and to a real dataset obtained with a noncontact laser scanner. Supplementary materials are available online.
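
A minimal sketch of the geodesic ingredient of the GGP approach: approximate geodesic distances on a point cloud by shortest paths on a k-nearest-neighbor graph, then build a correlation matrix from those distances instead of Euclidean ones. The kernel form, its length scale, and the synthetic surface are assumptions for illustration; the paper's surface parameterization step is omitted.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(7)
u = rng.uniform(0, 1, 400)
v = rng.uniform(0, 1, 400)
pts = np.column_stack([u, v, 0.3 * np.sin(2 * np.pi * u)])   # a wavy surface patch

knn = kneighbors_graph(pts, n_neighbors=8, mode="distance")  # local Euclidean edges
D_geo = shortest_path(knn, method="D", directed=False)       # approximate geodesics

length_scale = 0.2                                           # illustrative value
K = np.exp(-((D_geo / length_scale) ** 2))                   # geodesic-based correlation
print(K.shape, bool(np.isinf(D_geo).any()))                  # check graph connectivity
```

The design choice this mimics is that two points close in 3D space but far apart along the surface (e.g., on opposite sides of a fold) receive low correlation, unlike under a Euclidean kernel.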

9.
Recent studies on the effects of parameter estimation on control charts have focused on their conditional in-control (IC) performance and recommended either a minimum number of Phase I samples (m) or adjustments to the control limit factor (L) that guarantee a desired IC performance with high probability. In most cases, the number of samples required is prohibitively large in practice, and the adjustments for smaller numbers of samples entail, as a counterpart, a deterioration of the chart's out-of-control (OOC) performance. This presents the user with a hard decision: finding the best compromise between the objectives of good (or acceptable) IC performance, good OOC performance, and a practicable number of Phase I samples. Therefore, in the context of the S² chart, we propose a new approach that takes both the desired IC and OOC performances (which should be within specified tolerances with a specified high joint probability) as constraints for the optimization of the pair (L, m). This is the first work that simultaneously treats the choice of m and the control limit adjustment in the framework of an optimization problem. With our model, the user can automatically obtain the most feasible (minimum m) solution that satisfies his or her requirements on both the IC and OOC performances.
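
A Monte Carlo sketch of the "guaranteed conditional IC performance" ingredient for an S² chart with upper limit L·S0², where S0² is the Phase I pooled variance: simulate S0², compute each realization's conditional false alarm rate, and estimate the probability that it stays within a tolerance of the nominal rate. The chart form, tolerance, and constants are illustrative; the paper's optimization also constrains OOC performance, which is omitted here.

```python
import numpy as np
from scipy.stats import chi2

def ic_guarantee_prob(L, m, n=5, alpha0=0.005, tol=1.2, reps=100_000, seed=8):
    """P(conditional false alarm rate <= tol * alpha0) for UCL = L * S0^2."""
    rng = np.random.default_rng(seed)
    df1 = m * (n - 1)                            # Phase I pooled degrees of freedom
    s0_sq = rng.chisquare(df1, reps) / df1       # simulated S0^2 / sigma^2
    cfar = chi2.sf((n - 1) * L * s0_sq, n - 1)   # conditional false alarm rate
    return np.mean(cfar <= tol * alpha0)

L = chi2.ppf(1 - 0.005, 4) / 4                   # unadjusted limit factor for n = 5
for m in (25, 50, 100, 300):
    print(m, round(ic_guarantee_prob(L, m), 3))  # grows toward 1 as m increases
```

Wrapping a search over (L, m) around a probability like this one (plus an analogous OOC constraint) is the shape of the optimization the abstract describes.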

10.
Correlation of acoustic emission (AE) total counts N with stress intensity factor K and plastic zone size r_p has been examined for data generated during tensile deformation of notched specimens of nuclear- and commercial-grade AISI type 304 stainless steel. The variation of total AE counts with stress intensity factor on a log-log scale indicates that N is related to K as N = A·K^m, where A and m are constants whose magnitudes differ between the low-K and high-K regimes. Both steels show higher values of m up to macroyielding than those obtained from AE data between macroyielding and the stress corresponding to K_max in the experiments. The magnitudes of m were found to be higher for the commercial-grade steel than for the nuclear-grade one, and to depend on thickness. Analysis of the AE data also indicates that the relationship between N and r_p can be expressed by the equation N = α·r_p^β. The value of the constant β was experimentally found to be 1.3 via microhardness measurements; this is higher than the value β ≈ 1 usually assumed in the literature. The value of m in the equation N = A·K^m can be obtained from the exponent β of the relationship N = α·r_p^β as m = 2β, and such values of m can be correlated to the directly estimated exponents of the N-K relationship. The amounts of strain-induced α′ martensite formed at notch tips in the two steels were found to increase with increasing applied stress below the nominal yield strength of unnotched specimens.
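
A small sketch of extracting A and m from N = A·K^m by straight-line fits on log-log axes, done separately in the low-K and high-K regimes as reported above. The synthetic data and the breakpoint are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
K = np.linspace(10, 60, 50)                          # stress intensity factor values
N_true = np.where(K < 30, 0.05 * K**3.0, 2.0 * K**1.9)
N = N_true * rng.lognormal(sigma=0.05, size=K.size)  # noisy AE total counts

def fit_power_law(K, N):
    """Fit log N = log A + m log K; return (A, m)."""
    slope, intercept = np.polyfit(np.log(K), np.log(N), 1)
    return np.exp(intercept), slope

low, high = K < 30, K >= 30                          # illustrative breakpoint
A_lo, m_lo = fit_power_law(K[low], N[low])
A_hi, m_hi = fit_power_law(K[high], N[high])
print(f"low-K regime:  m = {m_lo:.2f}")              # near 3.0 for this synthetic data
print(f"high-K regime: m = {m_hi:.2f}")              # near 1.9 for this synthetic data
```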

11.
The problem of finding a nearly optimal distribution of polynomial degrees on a fixed finite element mesh is discussed. An a posteriori error estimator based on the minimum complementary energy principle is proposed which utilizes the displacement vector field computed from the finite element solution. This estimator, designed for p- and hp-extensions, is conceptually different from estimators based on residuals or patch recovery which are designed for h-extension procedures. The quality of the error estimator is demonstrated by examples. The results show that the effectivity index is reasonably close to unity and the sequences of p-distributions obtained with the error indicators closely follow the optimal trajectory. © 1998 John Wiley & Sons, Ltd.

12.
The use of Hotelling's T² charts with high-breakdown robust estimates to monitor multivariate individual observations is a recent trend in the control chart methodology. Vargas (J. Qual. Tech. 2003; 35: 367-376) introduced Hotelling's T² charts based on the minimum volume ellipsoid (MVE) and the minimum covariance determinant (MCD) estimates to identify outliers in Phase I data. Studies carried out by Jensen et al. (Qual. Rel. Eng. Int. 2007; 23: 615-629) indicated that the performance of these charts depends heavily on the sample size, the amount of outliers, and the dimensionality of the Phase I data. Chenouri et al. (J. Qual. Tech. 2009; 41: 259-271) recently proposed robust Hotelling's T² control charts for monitoring Phase II data based on the reweighted MCD (RMCD) estimates of the mean vector and covariance matrix from Phase I. They showed that Phase II RMCD charts perform better than Phase II standard Hotelling's T² charts based on outlier-free Phase I data, where the outlier-free Phase I data were obtained by applying MCD and MVE T² charts to historical data. Reweighted MVE (RMVE) and S-estimators are two competitors of the RMCD estimators, and it is a natural question whether Phase II Hotelling's T² charts with RMCD and RMVE estimates exhibit a pattern similar to the one observed by Jensen et al. (Qual. Rel. Eng. Int. 2007; 23: 615-629) for MCD- and MVE-based Phase I Hotelling's T² charts. In this paper, we conduct a comparative study to assess the performance of Hotelling's T² charts with RMCD, RMVE, and S-estimators using a large number of Monte Carlo simulations covering different data scenarios. Our results are generally in favor of the RMCD-based charts irrespective of the sample size, outliers, and dimensionality of the Phase I data. Copyright © 2010 John Wiley & Sons, Ltd.

13.
Numerical optimisation of superplastic deformation
Based on an approach due to Padmanabhan and Davies, a multi-dimensional regression analysis has been developed which predicts the superplastic deformation parameters m (the strain-rate sensitivity index) and K (the strength parameter) as functions of strain rate, grain size, and temperature. Further analysis enables the optimisation of the operating conditions (for minimum power consumption) through a prediction of the external load and power consumption using the predicted values of m and K. The procedure has been validated by applying it to the analysis of experimental data pertaining to the tin-lead eutectic alloy. It has been pointed out that the technique could be useful for problems (not necessarily in the area of superplasticity) where a particular parameter depends on a number of independent variables.
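
A minimal sketch, assuming the usual constitutive form σ = K·(strain rate)^m at fixed grain size and temperature: m and K then follow from a straight-line fit on log-log axes. The paper's regression additionally spans grain size and temperature; the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
rate = np.logspace(-5, -2, 25)                       # strain rate, 1/s
sigma = 80.0 * rate**0.48 * rng.lognormal(sigma=0.03, size=rate.size)  # flow stress

# log(sigma) = log(K) + m * log(rate): a linear fit recovers m and K.
m, logK = np.polyfit(np.log(rate), np.log(sigma), 1)
K = np.exp(logK)
print(f"m = {m:.3f}, K = {K:.1f}")                   # recovers about 0.48 and 80 here
```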

14.
One of the main difficulties that a reduced-order method could face is the poor separability of the solution. This problem is common to both a posteriori model order reduction (proper orthogonal decomposition, reduced basis) and a priori [proper generalized decomposition (PGD)] model order reduction. Early approaches to solve it include the construction of local reduced-order models in the framework of POD. We present here an extension of local models in a PGD—and thus, a priori—context. Three different strategies are introduced to estimate the size of the different patches or regions in the solution manifold where PGD is applied. As will be noticed, no gluing or special technique is needed to deal with the resulting set of local reduced-order models, in contrast to most proper orthogonal decomposition local approximations. The resulting method can be seen as a sort of a priori manifold learning or nonlinear dimensionality reduction technique. Examples are shown that demonstrate pros and cons of each strategy for different problems.

15.
Least-squares estimation (LSE) based on the Weibull probability plot (WPP) is the most basic method for estimating the Weibull parameters. The common procedure is to use the least-squares regression of Y on X, i.e. minimizing the sum of squares of the vertical residuals, to fit a straight line to the data points on the WPP and then calculate the LS estimators. This method is known to be biased. In the existing literature the least-squares regression of X on Y, i.e. minimizing the sum of squares of the horizontal residuals, has also been used by Weibull researchers. This motivated us to carry out a comparison between the estimators of the two LS regression methods using intensive Monte Carlo simulations. Both complete and censored data are examined. Surprisingly, the results show that LS Y on X performs better for small, complete samples, while LS X on Y performs better in the other cases in terms of the bias of the estimators. The two methods are also compared in terms of other model statistics. In general, when the shape parameter is less than one, LS Y on X provides a better model; otherwise, LS X on Y tends to be better.
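
A Monte Carlo sketch of the comparison machinery described above: fit the Weibull shape parameter from the probability plot by LS of Y on X and by LS of X on Y, and compare the averages of the two estimators. Median ranks (i − 0.3)/(n + 0.4) are one common plotting position; the paper's exact choices may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
beta_true, eta_true, n, reps = 0.8, 1.0, 20, 2000   # shape < 1, small complete samples

est_yx, est_xy = [], []
for _ in range(reps):
    x = np.sort(rng.weibull(beta_true, n) * eta_true)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)     # median-rank plotting positions
    X = np.log(x)                                    # WPP abscissa
    Y = np.log(-np.log(1.0 - F))                     # WPP ordinate
    b_yx, _ = np.polyfit(X, Y, 1)                    # regress Y on X: slope estimates beta
    b_xy, _ = np.polyfit(Y, X, 1)                    # regress X on Y: slope estimates 1/beta
    est_yx.append(b_yx)
    est_xy.append(1.0 / b_xy)

print(f"mean shape estimate, Y on X: {np.mean(est_yx):.3f}")
print(f"mean shape estimate, X on Y: {np.mean(est_xy):.3f}  (true = {beta_true})")
```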

16.
A regression methodology is introduced that obtains competitive, robust, efficient, high-breakdown regression parameter estimates while also providing an informative summary of possible multiple-outlier structure. The proposed method blends a cluster analysis phase with a controlled bounded influence (BI) regression phase, and is therefore referred to as cluster-based bounded influence regression, or CBI. Representing the data space via a special set of anchor points, a collection of point-addition OLS regression estimators forms the basis of a metric used in defining the similarity between any two observations. Cluster analysis then yields a main cluster ‘half-set’ of observations, with the remaining observations comprising one or more minor clusters. An initial regression estimator arises from the main cluster, and a group-additive DFFITS argument is used to carefully activate the minor clusters through a BI regression framework. CBI achieves a 50% breakdown point, is regression, scale, and affine equivariant, and is asymptotically normal. Case studies and Monte Carlo results demonstrate the performance advantage of CBI over other popular robust regression procedures with regard to coefficient stability, scale estimation, and standard errors. The dendrogram of the clustering process and the weight plot are graphical displays available for multivariate outlier detection. Overall, the proposed methodology represents an advancement in the field of robust regression, offering a distinct philosophical viewpoint towards data analysis and the marriage of estimation with diagnostic summary. Copyright © 2013 John Wiley & Sons, Ltd.

17.
M. Sorum, Technometrics, 2013, 55(4): 935-943
The problem is to estimate the expected and the optimal probabilities of misclassification in the context of the two-group p-dimensional normal classification problem with means and common covariance matrix unknown and a rule based on the linear discriminant function. Performance of several estimators is compared by means of a computer sampling study. For larger p (p = 20) certain estimators are definitely superior for each of the probabilities, while for small p there is less differentiation in performance.
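
A small simulation sketch of this setting: two p-variate normals with common covariance, a linear discriminant rule built from training data, and two simple error-rate estimators, the apparent error (resubstitution) and the plug-in estimate Φ(−D/2) based on the sample Mahalanobis distance D. The specific dimensions, sample sizes, and separation are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
p, n1, n2, delta = 5, 30, 30, 2.0
mu1, mu2 = np.zeros(p), np.r_[delta, np.zeros(p - 1)]

x1 = rng.normal(size=(n1, p)) + mu1
x2 = rng.normal(size=(n2, p)) + mu2
m1, m2 = x1.mean(0), x2.mean(0)
S = ((n1 - 1) * np.cov(x1.T) + (n2 - 1) * np.cov(x2.T)) / (n1 + n2 - 2)  # pooled cov

w = np.linalg.solve(S, m1 - m2)                 # linear discriminant direction
c = w @ (m1 + m2) / 2.0                         # cut point: classify to group 1 if w'x > c
D = np.sqrt((m1 - m2) @ w)                      # sample Mahalanobis distance

# Apparent error rate: resubstitute the training data into the fitted rule.
err1 = np.mean(x1 @ w < c)
err2 = np.mean(x2 @ w >= c)
apparent = (n1 * err1 + n2 * err2) / (n1 + n2)

plug_in = norm.cdf(-D / 2.0)                    # plug-in estimate of the optimal error
true_opt = norm.cdf(-delta / 2.0)               # known here only because we simulated
print(f"apparent = {apparent:.3f}, plug-in = {plug_in:.3f}, true optimal = {true_opt:.3f}")
```

Repeating this over many replications and values of p reproduces the kind of sampling comparison the abstract describes.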

18.
Toric codes are a class of m-dimensional cyclic codes introduced recently by Hansen (Coding theory, cryptography and related areas (Guanajuato, 1998), pp 132–142, Springer, Berlin, 2000; Appl Algebra Eng Commun Comput 13:289–300, 2002), and studied in Joyner (Appl Algebra Eng Commun Comput 15:63–79, 2004) and Little and Schenck (SIAM Discrete Math, 2007). They may be defined as evaluation codes obtained from monomials corresponding to integer lattice points in an integral convex polytope P ⊆ ℝ^m. As such, they are in a sense a natural extension of Reed–Solomon codes. Several articles cited above use intersection theory on toric varieties to derive bounds on the minimum distance of some toric codes. In this paper, we will provide a more elementary approach that applies equally well to many toric codes for all m ≥ 2. Our methods are based on a sort of multivariate generalization of Vandermonde determinants that has also been used in the study of multivariate polynomial interpolation. We use these Vandermonde determinants to determine the minimum distance of toric codes from simplices and rectangular polytopes. We also prove a general result showing that if there is a unimodular integer affine transformation taking one polytope P_1 to a second polytope P_2, then the corresponding toric codes are monomially equivalent (hence have the same parameters). We use this to begin a classification of two-dimensional cyclic toric codes with small dimension.
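
A brute-force sketch for a tiny case of the construction: build the toric code from the lattice points of a polytope P in ℤ², evaluate the corresponding monomials at all points of (F_q^*)², and find the minimum distance by enumerating codewords. Here q = 5 and P is the unit square, chosen purely for illustration; this is not the paper's Vandermonde method, only a check one can run by hand.

```python
import itertools
import numpy as np

q = 5
units = list(range(1, q))                              # F_q^* for prime q
pts = list(itertools.product(units, repeat=2))         # (F_q^*)^2, length (q-1)^2
P = [(0, 0), (1, 0), (0, 1), (1, 1)]                   # lattice points of the unit square

# Generator matrix: row for each monomial x^a * y^b, column for each point.
G = np.array([[pow(x, a, q) * pow(y, b, q) % q for (x, y) in pts] for (a, b) in P])

k, n = G.shape
min_wt = n
for msg in itertools.product(range(q), repeat=k):      # all q^k messages
    if not any(msg):
        continue                                       # skip the zero codeword
    cw = np.array(msg) @ G % q
    min_wt = min(min_wt, int(np.count_nonzero(cw)))
print(f"[n, k, d] = [{n}, {k}, {min_wt}]")             # expect d = (q-2)^2 = 9 here
```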

19.
The discrete (Legendre) orthogonal polynomials (DLOPs) are useful for approximation purposes. This set of mth-degree polynomials {P_m(K, N)} is orthogonal with unity weight over a uniform discrete interval and is completely determined by the normalization P_m(0, N) = 1. The authors are employing these polynomials as assumed modes in engineering applications of weighted residual methods. Since extensive material on these discrete orthogonal polynomials and their properties is not readily available, this paper is designed to unify and summarize the presently available information on the DLOPs and related polynomials. In so doing, many new properties have been derived. These properties, along with sketches of their derivation, are included. Also presented are a representation of the DLOPs as a product of vectors and matrices, and an efficient computational scheme for generating these polynomials.
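
A short numerical sketch (not the paper's matrix scheme): build the DLOPs on the uniform grid {0, 1, ..., N−1} by Gram-Schmidt on the monomials under the unity-weight discrete inner product, then scale each polynomial so that P_m(0, N) = 1, the normalization used above.

```python
import numpy as np

def dlop(N, max_deg):
    """Rows of the returned (max_deg+1, N) array sample P_m(K, N) at K = 0..N-1."""
    K = np.arange(N, dtype=float)
    basis = []
    for m in range(max_deg + 1):
        v = K**m
        for u in basis:                          # subtract components along lower degrees
            v = v - (v @ u) / (u @ u) * u
        basis.append(v)
    return np.array([v / v[0] for v in basis])   # enforce P_m(0, N) = 1

P = dlop(N=10, max_deg=3)
print(np.round(P @ P.T, 10))                     # off-diagonal entries are ~0: orthogonality
```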

20.
The outlying tendency of any case in a multiple regression with p predictors may be estimated by drawing all subsets of size p from the remaining cases and fitting the model. Each such subset yields an elemental residual for the case in question, and a suitable summary statistic of these can be used as an estimate of the case's outlying tendency. We propose two such summary statistics: an unweighted median, which is of bounded influence, and a weighted median, which is more efficient but less robust. The computational load of the procedure is reduced by using random samples in place of the full set of subsets of size p. As a byproduct the method yields useful information on the influence (or leverage) of cases and the mutual masking of high-leverage points.
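
A sketch of the unweighted-median variant under simple assumptions: for each case, repeatedly fit the model exactly through a random subset of p other cases (an elemental fit), record that fit's residual at the case, and take the median of these elemental residuals as the case's outlying-tendency score. Subset count and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, n_subsets = 50, 3, 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # includes an intercept
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)
y[0] += 8.0                                     # plant a single outlier at case 0

def outlying_tendency(i):
    """Unweighted median of random elemental residuals for case i."""
    others = np.delete(np.arange(n), i)
    res = []
    for _ in range(n_subsets):
        J = rng.choice(others, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[J], y[J])  # exact fit through p cases
        except np.linalg.LinAlgError:
            continue                            # skip a (rare) singular subset
        res.append(y[i] - X[i] @ beta)
    return np.median(res)

scores = np.array([outlying_tendency(i) for i in range(n)])
print(int(np.argmax(np.abs(scores))), round(scores[0], 2))  # case 0 should stand out
```

Because each elemental fit ignores the case being scored, a masked outlier cannot pull its own fitted value toward itself, which is the point of the procedure.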
