Similar Articles: 20 results found (search time: 31 ms)
1.
The manufacturing industry has prioritised enhancing the quality, lifetime and conforming rate of products. Process capability indices (PCIs) are used to measure process potential and performance. Here the process capability is evaluated through product survival time: a longer lifetime implies a better process capability and higher reliability. To save experimental time and cost, censored samples are commonly used in practice. For products whose lifetimes follow a two-parameter exponential distribution, this study constructs a uniformly minimum variance unbiased estimator (UMVUE) of the lifetime performance index based on a type II right-censored sample. The UMVUE of the lifetime performance index is then used to develop a new hypothesis testing procedure under the condition of a known lower specification limit. Finally, two practical examples illustrate how the testing procedure is employed to determine whether the product is reliable.
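The estimator described above can be sketched numerically for the exponential case. The sketch below is a hedged illustration, not the paper's exact UMVUE: it assumes the common form C_L = 1 − (L − μ)/θ of the lifetime performance index for a two-parameter exponential distribution with location μ and scale θ, and uses plain MLE-style plug-in estimates from a type II right-censored sample; the paper's UMVUE adds unbiasedness corrections omitted here.

```python
def estimate_cl(observed, n, L):
    """Plug-in estimate of C_L = 1 - (L - mu)/theta from a type II
    right-censored sample: `observed` holds the k smallest of n ordered
    lifetimes; the remaining n - k units survived past observed[-1]."""
    k = len(observed)
    mu_hat = observed[0]                         # estimate of the location parameter
    total = sum(x - mu_hat for x in observed)    # total time on test above mu_hat
    total += (n - k) * (observed[-1] - mu_hat)   # censored units survived past x_(k)
    theta_hat = total / k                        # MLE-style estimate of the scale
    return 1 - (L - mu_hat) / theta_hat

# Example: k = 4 observed failures out of n = 6, lower specification L = 1.0.
cl = estimate_cl([2.0, 3.5, 4.0, 5.0], n=6, L=1.0)
```

A larger estimate indicates a more capable process; the paper's test compares the (bias-corrected) estimate against a critical value at a given significance level.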

2.
This paper presents a completely hierarchical two-dimensional curved beam element formulation where the element displacement field can be of arbitrary polynomial orders pξ and pη in the axial and the transverse directions of the element. The approximation functions and the corresponding nodal variables for the beam element are derived by first constructing the hierarchical one-dimensional approximation functions of orders pξ and pη and the corresponding hierarchical nodal variable operators for each of the two directions ξ and η, and then taking their product. This procedure yields approximation functions and nodal variables for the curved beam element that correspond to polynomial orders pξ and pη in the ξ and η directions. The element approximation is hierarchical, i.e., the approximation functions and the nodal variables are both hierarchical. Thus, the element matrix and the load vectors corresponding to the polynomial orders pξ and pη are a subset of those corresponding to the polynomial orders (pξ + 1) and (pη + 1). The element formulation ensures C0 continuity.

The element properties are derived using the principle of virtual work and the hierarchical element displacement approximation. The element geometry is constructed using the coordinates of the nodes located on the elastic axis of the element and the node point vectors indicating nodal depths and the element width at the nodes. The orders of approximation along the length of the element as well as in the transverse direction can be chosen independently to obtain optimum (maximum) rate of convergence.

Numerical examples are presented to demonstrate the accuracy, simplicity of modeling, effectiveness, faster rate of convergence and overall superiority of the present formulation. Results obtained from the present formulation are also compared with h-approximation models and available analytical solutions.
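The tensor-product construction of hierarchical approximation functions can be illustrated with integrated-Legendre "bubble" modes, a standard choice for hierarchical p-version bases. This is a sketch under that assumption, not necessarily the paper's exact functions or nodal-variable operators:

```python
def legendre(j, x):
    """Legendre polynomial P_j(x) by the three-term recurrence."""
    p0, p1 = 1.0, x
    if j == 0:
        return p0
    for k in range(2, j + 1):
        p0, p1 = p1, ((2*k - 1)*x*p1 - (k - 1)*p0) / k
    return p1

def shape_functions(p, xi):
    """Hierarchical 1D basis of order p at xi in [-1, 1]: two linear end
    modes plus p - 1 internal modes that vanish at xi = +/-1."""
    funcs = [(1 - xi) / 2, (1 + xi) / 2]
    for j in range(2, p + 1):
        funcs.append((legendre(j, xi) - legendre(j - 2, xi)) / (2*(2*j - 1))**0.5)
    return funcs

# Hierarchy: the order-2 basis is a prefix of the order-3 basis, so
# lower-order element matrices are subsets of higher-order ones.
assert shape_functions(2, 0.3) == shape_functions(3, 0.3)[:3]

# 2D basis on the (xi, eta) square as the product of the 1D bases.
basis_2d = [a * b for a in shape_functions(2, 0.3) for b in shape_functions(2, -0.5)]
```

Raising the polynomial order appends new functions without altering the existing ones, which is exactly the subset property of the element matrices stated in the abstract.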


3.
Process capability indices (PCIs) are used to measure process potential and performance. This study constructs a uniformly minimum variance unbiased estimator (UMVUE) of the lifetime performance index based on upper record values for the Weibull lifetime model. The UMVUE of the lifetime performance index is then used to develop a new hypothesis testing procedure under the condition of a known lower specification limit. Finally, two examples are presented to assess the behavior of the test statistic for testing the null hypothesis at a given significance level. Product managers can then employ the new testing procedure to determine whether the process adheres to the required level.

4.
In this paper, model sets for linear time-invariant systems spanned by fixed-pole orthonormal bases are investigated. The obtained model sets are shown to be complete in Lp(T) (1 < p < ∞), the Lebesgue spaces of functions on the unit circle T, and in C(T), the space of continuous periodic functions on T. The Lp-norm error bounds for estimating systems in Lp(T) by the partial sums of the Fourier series formed by the orthonormal functions are computed for the case 1 < p < ∞. Some inequalities on the mean growth of the Fourier series are also derived. These results have applications in estimation and model reduction.

5.
The aim of this paper is twofold. First, we point out that the hypothesis D(t1)D(t2) = D(t2)D(t1) imposed in [1] can be removed. Second, a constructive method is proposed for obtaining analytic-numerical solutions with a prefixed accuracy in a bounded domain Ω(t0,t1) = [0,p] × [t0,t1] for mixed problems of the type ut(x,t) − D(t)uxx(x,t) = 0, 0 < x < p, t > 0, subject to u(0,t) = u(p,t) = 0 and u(x,0) = F(x). Here, u(x,t) and F(x) are r-component vectors, D(t) is an analytic Cr×r-valued function, and there exists a positive number δ such that every eigenvalue z of (1/2)(D(t) + D(t)H), where D(t)H denotes the conjugate transpose, is greater than δ. An illustrative example is included.

6.
A polynomial is said to be of type (p1, p2, p3) relative to a directed line in the complex plane if, counting multiplicities, it has p1 zeros to the left of, p2 zeros on, and p3 zeros to the right of the line. In this paper we determine explicitly the types of all polynomials belonging to a very restricted (but infinite) family of polynomials. A polynomial ƒ belongs to this family if and only if its coefficients are such that the polynomial ƒ*(0)ƒ(z)−ƒ(0) ƒ*(z) is a monomial; here ƒ* denotes the reflection of ƒ in the directed line.

A special case of the present result appeared in an earlier publication.
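The type (p1, p2, p3) is straightforward to state computationally once the zeros are known. The sketch below classifies given zeros relative to a directed line through z0 with direction d; the paper's contribution is determining the type from the coefficients without computing zeros, so this only illustrates the definition:

```python
def polynomial_type(zeros, z0, d, tol=1e-9):
    """Count zeros (with multiplicity, if repeated in `zeros`) to the left
    of, on, and to the right of the directed line through z0 with
    direction d (a nonzero complex number)."""
    p1 = p2 = p3 = 0
    for z in zeros:
        cross = ((z - z0) / d).imag   # > 0: left of the line; < 0: right
        if cross > tol:
            p1 += 1
        elif cross < -tol:
            p3 += 1
        else:
            p2 += 1
    return (p1, p2, p3)

# Zeros of z^3 + z = z(z - i)(z + i) relative to the upward imaginary axis:
t = polynomial_type([0, 1j, -1j], z0=0, d=1j)   # all three lie on the line
```

For the upward-directed imaginary axis (d = i), "left" coincides with the open left half-plane, so the type reduces to the usual Routh-Hurwitz zero count.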


7.
In the service (or manufacturing) industries, process capability indices (PCIs) are used to assess whether product quality meets the required level. The lifetime performance index (or larger-the-better PCI) CL, where L is the lower specification limit, is frequently used to measure product performance. This study first uses the max p-value method to select the optimal value of the shape parameter β of the Weibull distribution, which is then treated as known. Second, we construct the maximum likelihood estimator (MLE) of CL based on a type II right-censored sample from the Weibull distribution. The MLE of CL is then used to develop a novel hypothesis testing procedure, provided that L is known. Finally, we give a practical example to illustrate the use of the testing procedure at a given significance level α.

8.
H. Chen, K.S. Surana, Computers & Structures, 1993, 48(6): 1041-1056
This paper presents a piecewise hierarchical p-version finite element formulation for laminated composite axisymmetric solids for linear static analysis. The element formulation incorporates higher order deformation theories and is in total agreement with the physics of deformation in laminated composites. The element geometry is defined by eight nodes located on the boundaries of the element. The lamina thicknesses are used to create a nine-node p-version configuration for each lamina of the element. The displacement approximation for the element is piecewise hierarchical and is developed by first establishing a hierarchical displacement approximation for the nine-node configuration of each lamina of the laminate and then imposing interlamina continuity conditions of displacements at the interfaces between laminas. The hierarchical approximation functions and the corresponding nodal variables for each lamina are derived from the Lagrange family of interpolation functions and can be of arbitrary polynomial orders kpξ and kpη in the kξ and kη directions for a typical lamina k. The formulation ensures C0 continuity, i.e., continuity of displacements across interelement as well as interlamina boundaries.

The element properties are constructed by assembling individual lamina properties which are derived using the principle of virtual work and the hierarchical displacement approximation for the laminas. Transformation matrices, formed based on interlamina continuity conditions, are used to transform each lamina's degrees of freedom into the degrees of freedom for the laminate. Thus, each individual lamina stiffness matrix and equivalent load vector are transformed and then summed to establish the laminate stiffness matrix and equivalent load vector. There is no restriction on either the number of laminas or their lay-up pattern. Each lamina can be generally orthotropic and the material directions and the layer thickness may vary from point to point within each lamina.

Numerical examples are presented to demonstrate the effectiveness, modeling convenience, accuracy, and overall superiority of the present formulation for laminated composite axisymmetric solids and shells.


9.
This paper presents a p-version geometrically nonlinear (GNL) formulation based on a total Lagrangian approach for a three-node axisymmetric curved shell element. The approximation functions and the nodal variables for the element are derived directly from the Lagrange family of interpolation functions of orders pξ and pη. This is accomplished by first establishing one-dimensional hierarchical approximation functions and the corresponding nodal variable operators in the ξ and η directions for the equivalent configurations that correspond to pξ + 1 and pη + 1 equally spaced nodes in the ξ and η directions, and then taking their products. The resulting element approximation functions and nodal variables are hierarchical and the element approximation ensures C0 continuity. The element geometry is described by the coordinates of the nodes located on the middle surface of the element and the nodal vectors describing the top and bottom surfaces of the element.

The element properties are established using the principle of virtual work and the hierarchical element approximation. In formulating the element properties, the complete axisymmetric state of stress and strain is considered; hence the element is equally effective for very thin as well as extremely thick shells. The formulation presented here removes virtually all of the drawbacks present in existing GNL axisymmetric shell finite element formulations and has many additional benefits. First, the currently available GNL axisymmetric shell finite element formulations are based on a fixed interpolation order and thus are not hierarchical and have no mechanism for p-level change. Second, the element displacement approximations in the existing formulations are either based on displacement fields linearized with respect to nodal rotations, in which case a true Lagrangian formulation is not possible and the load step size is severely limited, or based on nonlinear nodal rotation functions, in which case the kinematics of deformation is exact but additional complications arise due to the noncommutative nature of the nonlinear nodal rotation functions. Such limitations and difficulties do not exist in the present formulation. The hierarchical displacement approximation used here does not involve the traditional nodal rotations used in existing shell element formulations, so the difficulties associated with their use are not present in this formulation.

Incremental equations of equilibrium are derived and solved using the standard Newton method. The total load is divided into increments, and for each load increment, equilibrium iterations are performed until each component of the residuals is within a preset tolerance. Numerical examples are presented to show the accuracy, efficiency and advantages of the present formulation. The results obtained from the present formulation are compared with those available in the literature.


10.
The investigations focus on the construction of a Ck-continuous (k = 0, 1, 2) interpolating spline surface for a given data set consisting of points Pijk arranged in a regular triangular net and corresponding barycentric parameter triples (ui, vj, wk). We try to generalize an algorithm by A.W. Overhauser, who solved the analogous problem for a univariate data set. As a straightforward generalization does not work out, we adapt the Overhauser construction, blending basic surfaces with uniquely determined basic functions. This yields a spline surface with a polynomial parametric representation that displays C1- or C2-continuity along the common curve of two adjacent sub-patches. Local control of the emerging spline surface is provided, meaning that moving one data point P changes only some of the sub-patches around P and does not affect regions lying far away.
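The univariate Overhauser construction being generalized blends two parabolas into a C1 cubic segment, which coincides with the familiar Catmull-Rom spline. A minimal sketch of one segment, under that identification:

```python
def overhauser_segment(p0, p1, p2, p3, t):
    """Point on the Overhauser (Catmull-Rom) cubic at t in [0, 1],
    blending the parabola through (p0, p1, p2) with the one through
    (p1, p2, p3); the segment runs from p1 to p2."""
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2*p0 - 5*p1 + 4*p2 - p3) * t**2
                  + (-p0 + 3*p1 - 3*p2 + p3) * t**3)

# The segment interpolates its two interior control points.
v0 = overhauser_segment(0.0, 1.0, 3.0, 4.0, 0.0)
v1 = overhauser_segment(0.0, 1.0, 3.0, 4.0, 1.0)
```

Local control is visible here too: each segment depends only on four neighbouring data points, so moving one point affects only the nearby segments, mirroring the surface property claimed in the abstract.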

11.
Michael Hoch, Calphad, 1996, 20(4): 511-519
We analyzed the thermodynamic data of the binary system Bi2O3-B2O3. The phase diagram contains five compounds, four of which melt congruently, and a miscibility gap close to pure B2O3. The three sets of transformation data for Bi2O3, the enthalpy-of-mixing data in the liquid, and the critical point of the miscibility gap gave five equations for the excess Gibbs energy of mixing and five sets of values for the enthalpies of formation of the binary compounds. The critical point data yielded another set of transformation data for Bi2O3, and using the monotectic composition together with the enthalpy-of-mixing data yielded a further set. The values obtained using the enthalpy-of-mixing data are the most reliable.

12.
A graph G was defined in [16] as P4-reducible if no vertex in G belongs to more than one chordless path on four vertices, or P4. A graph G is defined in [15] as P4-sparse if no set of five vertices induces more than one P4 in G. P4-sparse graphs generalize both P4-reducible graphs and the well-known class of P4-free graphs, or cographs. In an extended abstract [11], the first author introduced a method using the modular decomposition tree of a graph as the framework for the resolution of algorithmic problems. This method was applied to the study of P4-sparse and extended P4-sparse graphs.

In this paper, we begin by presenting the complete information about the method used in [11]. We propose a unique tree representation of P4-sparse and a unique tree representation of P4-reducible graphs leading to a simple linear recognition algorithm for both classes of graphs. In this way we simplify and unify the solutions for these problems, presented in [16–19]. The tree representation of an n-vertex P4-sparse or a P4-reducible graph is the key for obtaining O(n) time algorithms for the weighted version of classical optimization problems solved in [20]. These problems are NP-complete on general graphs.

Finally, by relaxing the restriction concerning the exclusion of C5 cycles from P4-sparse and P4-reducible graphs, we introduce the classes of extended P4-sparse and extended P4-reducible graphs. We then show that a minimal amount of additional work suffices to extend most of our algorithms to these new classes of graphs.
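The defining property, that no five vertices induce more than one P4, can be checked directly by brute force. This is purely illustrative and far from the linear-time algorithms the paper develops:

```python
from itertools import combinations

def induces_p4(edges, quad):
    """True if the 4 vertices in `quad` induce a chordless path P4:
    on 4 vertices, exactly 3 edges with degree sequence [1, 1, 2, 2]
    characterizes P4."""
    sub = [e for e in edges if e[0] in quad and e[1] in quad]
    if len(sub) != 3:
        return False
    deg = {v: 0 for v in quad}
    for a, b in sub:
        deg[a] += 1
        deg[b] += 1
    return sorted(deg.values()) == [1, 1, 2, 2]

def is_p4_sparse(vertices, edges):
    """Check the definition: every 5-vertex set induces at most one P4."""
    for five in combinations(vertices, 5):
        p4s = sum(induces_p4(edges, set(q)) for q in combinations(five, 4))
        if p4s > 1:
            return False
    return True

# A 5-cycle induces five P4s (delete any one vertex), so C5 is not
# P4-sparse -- consistent with the extended classes, which add C5 back.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```

The brute force runs in O(n^5) time per graph; the tree representations in the paper are precisely what replaces this with linear-time recognition.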


13.
A method for obtaining reduced-order models of multivariable systems is described. The method has several advantages; for example, the reduced-order models retain the steady-state value and stability of the original system. Whether the original multivariable system is described in state-space form or in transfer-matrix form, the proposed method yields the reduced-order models in state-space form. The Routh approximation is used to formulate the common denominator polynomial of a reduced-order model, which determines the structure of the Ar matrix. The matrices Br and Cr are chosen appropriately, and some of their elements are specified in such a way that, after matching time moments and Markov parameters, the resulting equations are linear in the unknown elements of Br and Cr. The procedure is illustrated via a numerical example.
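One ingredient of such reduction methods, the time moments matched between the full and reduced models, are (up to sign and factorial conventions) the Maclaurin coefficients of the transfer function at s = 0. A hedged sketch of computing them by power-series division; this is an illustration, not the paper's Routh-based construction:

```python
def time_moment_coeffs(num, den, count):
    """First `count` coefficients of the power series of N(s)/D(s) at s = 0.
    `num` and `den` are coefficient lists in ascending powers of s;
    den[0] must be nonzero."""
    num = num + [0.0] * count
    den = den + [0.0] * count
    c = []
    for k in range(count):
        # long division: c_k = (a_k - sum_{j<k} c_j * b_{k-j}) / b_0
        s = num[k] - sum(c[j] * den[k - j] for j in range(k))
        c.append(s / den[0])
    return c

# G(s) = 1/(s + 1): series 1 - s + s^2 - s^3 + ...
coeffs = time_moment_coeffs([1.0], [1.0, 1.0], 4)
```

The leading coefficient c0 equals G(0), the steady-state gain, so a reduced model matching even the first moment retains the steady-state value, as the abstract states.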

14.
A statistically coherent view of confounding, motivated by the controversy over the proper control of confounding in the presence of prior knowledge, is presented. Confounding by a covariate C in the presence of data on C is distinguished from confounding in the absence of data on C. A covariate C is defined to be a nonconfounder in the absence of data on C if the population parameter of interest can be estimated without bias (asymptotically) absent data on C. Under this definition, C may be a confounder for some parameters of interest and a nonconfounder for others. If C is a confounder for a parameter of interest that has a causal interpretation, we call C a causal confounder. When data on C are available, C is defined to be a nonconfounder for a particular parameter of interest if and only if inference on that parameter does not depend on the data through C. Bayesians, frequentists and pure likelihoodists will in general agree on the prior knowledge necessary to render C a nonconfounder. In particular, C will in general be a nonconfounder precisely when the crude data ignoring C are S-sufficient for the parameter of interest. The intuitive view held by many practicing epidemiologists, that confounding by C represents a bias of the unadjusted crude estimator, is in a sense correct, provided inference is performed conditional on approximate ancillary statistics that measure the degree to which associations in the data differ, due to sampling variability, from those population associations known a priori.

15.
Consider the adic system associated with the substitution 1 → 12, …, (n − 1) → 1n, n → 1. In Sirvent (1996) it was shown that there exist a subset Cn of the underlying space and a map hn: Cn → Cn such that the dynamical system (Cn, hn) is semiconjugate to the adic system. In this paper we compute the Hausdorff and Billingsley dimensions of the geometrical realizations of the set Cn on the (n − 1)-dimensional torus. We also show that the dynamical system (Cn, hn) cannot be realized on the (n − 1)-torus.

16.
Let C1 be the class of finitely presented monoids with word problem solvable in linear time. Let P be a Markov property of monoids related to the class C1 in some sense. It is undecidable, given a monoid in C1, whether it satisfies P. Let C and C′ be classes of finitely presented monoids with word problem solvable within certain time bounds. If C contains C1 and C′ properly contains C, then it is undecidable, given a monoid in C′, whether it belongs to C.

17.
We have previously proposed a concept of p-valued-input, q-valued-output threshold logic, namely (p, q)-threshold logic, where 2 ≤ q < p and 3 ≤ p, and suggested that p-valued logical networks with costs as low as those of 2-valued logical networks could be obtained by using (p, q)-threshold elements with small values of q. In this paper, we describe (1) the condition under which there is a 2-place (p, q)-adic function f such that the output-closed set F, generated from f alone, is (p, q)-logically complete, and (2) the fact that any n-place (p, q)-adic function can be realized using at most O(n) elements of the above F.

18.
The determination of the rotational symmetry of a shape is useful for object recognition and shape analysis in computer vision applications. In this paper, a simple but effective algorithm to analyse the rotational symmetry of a given closed-curve shape S is proposed. A circle C, with the centroid of S as its center and the average radius of S as its radius, is superimposed on S, so that C and S intersect at a set of points. By theoretical analysis, the relationship between the order Ns of the rotational symmetry of S and the number of intersection points between C and S is established, and all possible values for Ns are determined. Finally, Ns is determined by evaluating the similarity between S and its rotated versions. The proposed algorithm involves only simple pixel operations and second-order moment computations. Several problems caused by the use of discrete image coordinates are analysed and solved to ensure correct decision-making in the algorithm. Good experimental results are also included.
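The final similarity-evaluation step can be sketched as follows: test whether a sampled shape maps onto itself under rotation by 2π/N about its centroid. The circle-intersection analysis that prunes candidate values of Ns is omitted here, and the point set and tolerance are illustrative assumptions:

```python
import math

def has_rotational_symmetry(points, n, tol=1e-6):
    """True if the point set maps onto itself under rotation by 2*pi/n
    about its centroid (points: list of (x, y) tuples)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    a = 2 * math.pi / n
    ca, sa = math.cos(a), math.sin(a)
    for x, y in points:
        # rotate (x, y) about the centroid by 2*pi/n
        dx, dy = x - cx, y - cy
        rx, ry = cx + ca*dx - sa*dy, cy + sa*dx + ca*dy
        # the rotated point must coincide with some original point
        if min((rx - px)**2 + (ry - py)**2 for px, py in points) > tol**2:
            return False
    return True

# A square sampled at its four vertices has 4-fold symmetry.
square = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
```

In the paper, the candidate set for Ns is first restricted via the intersection count, so this exact-match test is run only for a few values of N.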

19.
Designers must target realistic faults if they desire high-quality test and diagnosis of CMOS circuits. The authors propose a strategy for generating high-quality IDDQ test patterns for bridging faults, using a standard ATPG tool for stuck-at faults adapted to target bridging faults via IDDQ testing. The authors discuss the diagnosis capability of IDDQ test sets and specifically generated vectors that can improve diagnosability, and provide test and diagnosis results for benchmark circuits.

20.
Let S = {C1, …, Cm} be a set of clauses in the propositional calculus and let n denote the number of variables appearing in these clauses. We present an O(mn)-time algorithm to test whether S can be renamed as a Horn set.
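The renaming question can be illustrated by brute force: S is renamable Horn if and only if some subset of variables can be complemented so that every clause contains at most one positive literal. The exponential search below is only for intuition; the paper's algorithm achieves O(mn):

```python
from itertools import chain, combinations

def renamable_horn(clauses):
    """clauses: iterable of clauses; a literal is ('x', True) for x and
    ('x', False) for its negation. Tries every subset of variables to
    complement (exponential -- illustration only)."""
    variables = sorted({v for cl in clauses for v, _ in cl})
    for r in chain.from_iterable(combinations(variables, k)
                                 for k in range(len(variables) + 1)):
        flip = set(r)
        # a literal is positive after renaming iff its sign XOR "flipped"
        if all(sum(sign != (v in flip) for v, sign in cl) <= 1
               for cl in clauses):
            return True
    return False

# (x or y), (not x or not y), (x or not y), (not x or y): no renaming works.
s = [[('x', True), ('y', True)], [('x', False), ('y', False)],
     [('x', True), ('y', False)], [('x', False), ('y', True)]]
```

Any single clause is always renamable (complement all but one of its positive literals); the interesting cases, as in the example above, arise from interactions between clauses.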


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)