Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper studies the M/M/c queueing system with synchronous multiple working vacations. By formulating a quasi-birth-and-death (QBD) process model, an analytical expression for the rate matrix is given and the necessary and sufficient condition for stability of the system is obtained. Under this condition, the stationary distribution of the system state is derived using the UL-type RG factorization of the generator matrix of the irreducible QBD process together with the matrix-geometric solution method, and performance measures such as the stationary distributions of the queue length and the waiting time are obtained. Furthermore, the conditional stochastic decomposition structure of the stationary queue length, given that all servers are busy, is analyzed, and the probability distribution of the additional queue length is presented.
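The abstract above concerns an M/M/c queue with working vacations. As a point of reference, the stationary distribution of the vacation-free M/M/c special case can be computed directly from the birth-death balance equations; the working-vacation model itself requires the QBD rate matrix, so this is only a simplified sketch (truncation level and parameter values are illustrative):

```python
from math import factorial

def mmc_stationary(lam, mu, c, n_max=200):
    """Stationary distribution of the classic M/M/c queue (no vacations),
    from the birth-death balance equations. Requires rho = lam/(c*mu) < 1.
    The distribution is truncated at n_max states."""
    rho = lam / (c * mu)
    assert rho < 1, "stability condition lam < c*mu violated"
    a = lam / mu
    # unnormalized probabilities pi_n
    pis = []
    for n in range(n_max + 1):
        if n < c:
            pis.append(a**n / factorial(n))
        else:
            pis.append(a**c / factorial(c) * rho**(n - c))
    z = sum(pis)
    return [p / z for p in pis]

pi = mmc_stationary(lam=3.0, mu=1.0, c=4)
prob_all_busy = sum(pi[4:])                       # probability all servers busy
mean_in_system = sum(n * p for n, p in enumerate(pi))
```

With lam = 3, mu = 1, c = 4 the tail sum recovers the Erlang-C probability that all servers are busy.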

2.
Three models of the merging of dispersed cracks in the volume and on the surface of materials are discussed. Equations for the probability of the merging of cracks of any size are derived on the basis of two models: crack-length distributions are utilized in the first model, and only the numeric characteristics of these distributions in the second. A third model determines the probability of the merging of a crack of maximum length with any nearby crack. The relationships obtained can be used to predict the merging of accumulated defects on the basis of number and size, and also with respect to time. Translated from Problemy Prochnosti, No. 2, pp. 71–78, February, 1992.

3.
A complex product is often inspected more than once, in a sequential manner, to ensure the product's quality. Based on the number of defects discovered during each round of inspection, we can estimate the number of defects still remaining in the product. For each defect, the probability that it will be detected during each inspection cycle is usually assumed to be a known 'constant'. However, in many practical situations some defects are easily detected, while others are much more difficult to identify. In this paper, we propose a 'beta-geometric' inspection model in which the heterogeneity in detection probability is described by a beta distribution. In a numerical study, we show that this more realistic inspection model clearly outperforms traditional estimation methods based on the assumption of a constant detection probability.
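The closed forms behind such a beta-geometric model are simple enough to sketch: if the per-cycle detection probability p is Beta(a, b), then E[(1-p)^n] = B(a, b+n)/B(a, b), which gives the probability of detection within n rounds (a hedged illustration, not the authors' estimator):

```python
from math import lgamma, exp

def log_beta(x, y):
    """log of the beta function B(x, y)."""
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def prob_detected_within(n, a, b):
    """P(defect found in the first n inspection rounds) when the per-round
    detection probability p is Beta(a, b): 1 - E[(1-p)^n] = 1 - B(a, b+n)/B(a, b)."""
    return 1.0 - exp(log_beta(a, b + n) - log_beta(a, b))

def prob_detected_within_constant(n, p):
    """Constant-p benchmark: 1 - (1-p)^n."""
    return 1.0 - (1.0 - p) ** n
```

With a = b = 1 (uniform heterogeneity, mean p = 0.5), detection within n rounds is n/(n+1), well below the constant-p value 1 - 0.5^n: heterogeneity leaves the hard-to-find defects behind, which is why the constant-p assumption overstates progress.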

4.
Equations related to the spatial statistics of defects and the probability of detecting defects in one-dimensional components have been derived. The spatial-statistics equations make it possible to estimate the probability of existence of safe, defect-free zones between the defects in one-dimensional components. It is demonstrated that even for moderate defect number densities, the probability of existence of clusters of two or more defects at a critically small distance is substantial and should not be neglected in calculations related to risk of failure. The formulae derived also have important applications in reliability and risk assessment studies involving the probability of clustering of events in a given time interval. It is demonstrated that while for large tested fractions of one-dimensional components the failures are almost entirely caused by a small fraction of the largest defects, for small tested fractions almost all defects participate as initiators of failure. The problem of non-destructive defect inspection of one-dimensional components has also been addressed: a general equation has been derived for the probability of detecting at least a single defect when only a fraction of the component is examined.
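Assuming, as is common in such analyses (the paper's exact equations may differ), that defects follow a homogeneous Poisson process with density lam per unit length, the two quantities discussed above take elementary forms:

```python
from math import exp

def prob_detect_at_least_one(lam, L, f):
    """Probability of finding >= 1 defect when a fraction f of a component of
    length L is inspected, under a homogeneous Poisson defect process with
    density lam per unit length (a modelling assumption)."""
    return 1.0 - exp(-lam * f * L)

def prob_gap_smaller_than(lam, s):
    """Probability that the spacing to the next defect is below s, i.e. two
    defects 'clustered' within a critical distance s."""
    return 1.0 - exp(-lam * s)
```

Even at a moderate density of lam = 0.1 defects per metre, two neighbouring defects fall within s = 1 m with probability about 0.095, consistent with the abstract's point that clustering is non-negligible.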

5.
In the manufacturing field, the assembly process heavily affects final product quality and cost. Specific studies of the causes of assembly defects showed that operator errors account for a high percentage of total defects, and models linking assembly complexity with the operator-induced defect rate have been developed. Building on these models, the present paper proposes a new paradigm for designing inspection strategies for short-run productions, for which traditional approaches may not be feasible. Specifically, defect generation models are developed to obtain a priori predictions of the probability of occurrence of defects, which are useful for designing effective inspection procedures. The proposed methodology is applied to a case study concerning the assembly of mechanical components in the manufacturing of hardness testing machines.

6.
Traditionally, solving an independent-demand inventory model yields a closed-form expression for the economic lot size. Generally, this follows from the result that holding costs and setup costs are constant and equal at the optimum. However, the experience of the Japanese indicates that this need not be the case: setup cost may be reduced by investing in shorter setup times, resulting in smaller lot sizes and increased flexibility. Various authors have investigated the impact of such investment on classical lot-sizing formulas, deriving modified relationships. A common assumption of this research has been that demand and lead time are deterministic. This paper extends that work by considering the more realistic case of investing in reduced setup costs when lead time is stochastic. Closed-form relationships for the optimal lot size, optimal setup cost, optimal total cost, etc. are derived. Numerical results are presented for lead times following uniform and normal distributions, and sensitivity analysis indicates under what conditions investment is warranted.
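A minimal numeric sketch of the trade-off, under the illustrative assumption that an investment i reduces setup cost as K(i) = K0·exp(-delta·i) with amortization rate r (the paper derives closed forms; here a grid search suffices):

```python
from math import sqrt, exp

def eoq(D, K, h):
    """Classic economic order quantity and its annual holding+setup cost."""
    q = sqrt(2 * D * K / h)
    return q, sqrt(2 * D * K * h)

def best_setup_investment(D, K0, h, r, delta, i_max=1000.0, steps=10000):
    """Grid search (an illustrative sketch, not the paper's closed form) over an
    investment i that lowers setup cost as K(i) = K0*exp(-delta*i); r is the
    annual amortization rate of the invested capital."""
    best = (0.0, K0, eoq(D, K0, h)[1])
    for n in range(steps + 1):
        i = i_max * n / steps
        K = K0 * exp(-delta * i)
        total = eoq(D, K, h)[1] + r * i
        if total < best[2]:
            best = (i, K, total)
    return best  # (investment, reduced setup cost, total annual cost)
```

For D = 1000, K0 = 100, h = 5, r = 1, delta = 0.01, the no-investment cost of 1000 drops to roughly 522 at an investment near 322, illustrating why setup-reduction investment can pay off.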

7.
An approximation to the average run length for cumulative sum control charts is derived using the analogy between this procedure and the sequential probability ratio test for normal observations. The approximation is also derived using a Brownian motion approximation to the cumulative sum, which does not require the normality assumption. The analytical expression for the average run length obtained from the approximation is then used to determine the optimal choice of parameters to minimize the average run length at a specified deviation from control, subject to a fixed average run length when in control.
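Siegmund's well-known Brownian-motion approximation is one concrete instance of the kind of expression described here (the paper's own approximation may differ in its correction terms):

```python
from math import exp

def cusum_arl_siegmund(delta, k, h):
    """Siegmund's Brownian-motion approximation to the ARL of a one-sided
    CUSUM with reference value k and decision limit h (standardized units).
    delta is the true mean shift; the 1.166 term corrects for the excess of
    the cumulative sum over the boundary at the signal."""
    theta = delta - k
    b = h + 1.166
    if abs(theta) < 1e-10:
        return b * b
    return (exp(-2 * theta * b) + 2 * theta * b - 1) / (2 * theta * theta)
```

For k = 0.5 and h = 4 this gives an in-control ARL of about 338, close to the tabulated exact value of roughly 336, and an ARL of about 8.3 at a one-sigma shift.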

8.
The paper presents a model that extends the stochastic finite element method to the modelling of transitional energetic–statistical size effect in unnotched quasibrittle structures of positive geometry (i.e. failing at the start of macro‐crack growth), and to the low probability tail of structural strength distribution, important for safe design. For small structures, the model captures the energetic (deterministic) part of size effect and, for large structures, it converges to Weibull statistical size effect required by the weakest‐link model of extreme value statistics. Prediction of the tail of extremely low probability such as one in a million, which needs to be known for safe design, is made feasible by the fact that the form of the cumulative distribution function (cdf) of a quasibrittle structure of any size has been established analytically in previous work. Thus, it is not necessary to turn to sophisticated methods such as importance sampling and it suffices to calibrate only the mean and variance of this cdf. Two kinds of stratified sampling of strength in a finite element code are studied. One is the Latin hypercube sampling of the strength of each element considered as an independent random variable, and the other is the Latin square design in which the strength of each element is sampled from one overall cdf of random material strength. The former is found to give a closer estimate of variance, while the latter gives a cdf with smaller scatter and a better mean for the same number of simulations. For large structures, the number of simulations required to obtain the mean size effect is greatly reduced by adopting the previously proposed method of random property blocks. Each block is assumed to have a homogeneous random material strength, the mean and variance of which are scaled down according to the block size using the weakest‐link model for a finite number of links. 
To check whether the theoretical cdf is followed at least up to the tail beginning at a failure probability of about 0.01, a hybrid of stratified sampling and Monte Carlo simulation in the lowest probability stratum is used. With the present method, the probability distribution of strength of quasibrittle structures of positive geometry can be easily estimated for any structure size. Copyright © 2007 John Wiley & Sons, Ltd.
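Of the two stratified schemes compared above, Latin hypercube sampling is the easier one to sketch; a minimal stdlib-only version placing one sample per equal-probability stratum in each dimension might look like this (the paper applies the idea to per-element strength variables):

```python
import random

def latin_hypercube(n, d, rng=None):
    """n points in [0,1)^d with exactly one point per equal-probability
    stratum in every dimension (a minimal Latin hypercube sketch)."""
    rng = rng or random.Random(0)
    cols = []
    for _ in range(d):
        perm = list(range(n))       # one stratum index per sample
        rng.shuffle(perm)
        # jitter each sample uniformly within its stratum
        cols.append([(p + rng.random()) / n for p in perm])
    return [tuple(col[i] for col in cols) for i in range(n)]
```

Each marginal is perfectly stratified, which is what reduces the variance of mean estimates relative to plain Monte Carlo.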

9.
It is shown how the cumulative failure probability of an inhomogeneously stressed structure may be estimated when the material contains N different types of defect, each having its own fracture criterion. The theory is applied to:
  1. materials containing a distribution of randomly oriented cracks having various sizes,
  2. materials containing a distribution of cracks, the tips of which lie in material of differing fracture toughness.
It is also shown how to account for defects which grow in size because of fatigue and/or creep mechanisms. As an example the fatigue crack growth of defects, in a material having an inhomogeneous distribution of fracture toughness, is discussed. A method of accounting for defect nucleation during service is described.

10.
In modern, high-volume production environments such as wafer manufacturing, a small sustained shift is not easily detected in a short period of time, but may have a great impact on the manufacturing process. Thus, it is important to detect and identify a small sustained shift of the production process in a timely manner and correct the undesired situation. The cumulative sum (CUSUM) control scheme is considered one of the most efficient tools for detecting a small structural change in a process. For control of defects in a production process, however, too often the assumption is made that the defects follow a Poisson distribution. In practice, the process is more complex and the distribution of defects is more appropriately modeled by a compound Poisson distribution. In this paper, the underlying distribution is the geometric Poisson distribution, a Poisson distribution compounded by a geometric distribution, and a CUSUM control scheme based on the geometric Poisson process is addressed. An effective CUSUM control scheme must provide an adequate average run length (ARL), which can be obtained from the probability transition matrix of the Markov chain proposed by Brook and Evans (1972). With a proper ARL selected, the geometric Poisson CUSUM control scheme is developed for process control.
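A simulation-based sketch of the scheme (the paper computes the ARL exactly via the Brook-Evans Markov chain; Monte Carlo is used here only because it fits in a few lines, and all parameter values are illustrative):

```python
import random
from math import exp

def geometric_poisson(lam, rho, rng):
    """Pólya-Aeppli (geometric Poisson) variate: a Poisson(lam) number of
    defect clusters, each cluster of geometric size with mean 1/(1-rho)."""
    # Knuth's Poisson generator (the stdlib random module has no poisson())
    limit, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            break
        k += 1
    total = 0
    for _ in range(k):
        size = 1
        while rng.random() < rho:   # geometric cluster size >= 1
            size += 1
        total += size
    return total

def cusum_arl_mc(lam, rho, k_ref, h, runs=300, seed=1):
    """Monte Carlo estimate of the ARL of an upper CUSUM S = max(0, S + X - k_ref)
    on geometric Poisson counts X; a signal occurs when S exceeds h."""
    rng = random.Random(seed)
    total_steps = 0
    for _ in range(runs):
        s, t = 0.0, 0
        while s <= h:
            s = max(0.0, s + geometric_poisson(lam, rho, rng) - k_ref)
            t += 1
        total_steps += t
    return total_steps / runs
```

With the in-control mean lam/(1-rho) below the reference value k_ref, the in-control ARL is large; raising lam pushes the mean above k_ref and the estimated ARL collapses, which is the detection behaviour the scheme is designed for.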

11.

This study proposes two search models for multiple targets (random search and systematic search) in an unstructured search field, derived by generalizing search models for single-target search. Whilst the probability of locating a single target in the random search model is typically exponentially distributed, the probability of locating multiple targets was found to be distributed hypo-exponentially. The systematic search model was extended from a piecewise-linear function for a single target to a piecewise-curve function for multiple targets. To test whether these search models could predict human search performance, first, the visibility area in a fixation, a main component of the search models, was investigated at various fixation durations. Sensitivity analysis of these data indicated that a short fixation duration and a small fixation overlap would produce better search performance. Next, the visibility area data were combined with the search models and compared to human performance on a free search field with three targets. The systematic and random models provided upper and lower boundaries of actual human search performance. Additionally, at the start of the search task for multiple targets, performance was close to the systematic search model, while for the later targets it approached the random search model. Observers may have changed their search strategy during this multiple-target visual search task.
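The hypo-exponential distribution mentioned above has a closed-form CDF when the stage rates (time to find the 1st, 2nd, ... target) are distinct, which a short function can evaluate (a sketch under that distinct-rates assumption):

```python
from math import exp

def hypoexp_cdf(rates, t):
    """CDF of a sum of independent exponential stage times with pairwise
    distinct rates: P(T <= t) = 1 - sum_i c_i * exp(-r_i * t), where
    c_i = prod_{j != i} r_j / (r_j - r_i)."""
    tail = 0.0
    for i, ri in enumerate(rates):
        c = 1.0
        for j, rj in enumerate(rates):
            if j != i:
                c *= rj / (rj - ri)
        tail += c * exp(-ri * t)
    return 1.0 - tail
```

With a single stage this reduces to the familiar exponential search-time model; with several stages it gives the probability that all targets have been located by time t.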

12.
The purpose of this work is to develop and verify statistical models for protein identification using peptide identifications derived from the results of tandem mass spectral database searches. Recently we have presented a probabilistic model for peptide identification that uses hypergeometric distribution to approximate fragment ion matches of database peptide sequences to experimental tandem mass spectra. Here we apply statistical models to the database search results to validate protein identifications. For this we formulate the protein identification problem in terms of two independent models, two-hypothesis binomial and multinomial models, which use the hypergeometric probabilities and cross-correlation scores, respectively. Each database search result is assumed to be a probabilistic event. The Bernoulli event has two outcomes: a protein is either identified or not. The probability of identifying a protein at each Bernoulli event is determined from relative length of the protein in the database (the null hypothesis) or the hypergeometric probability scores of the protein's peptides (the alternative hypothesis). We then calculate the binomial probability that the protein will be observed a certain number of times (number of database matches to its peptides) given the size of the data set (number of spectra) and the probability of protein identification at each Bernoulli event. The ratio of the probabilities from these two hypotheses (maximum likelihood ratio) is used as a test statistic to discriminate between true and false identifications. The significance and confidence levels of protein identifications are calculated from the model distributions. The multinomial model combines the database search results and generates an observed frequency distribution of cross-correlation scores (grouped into bins) between experimental spectra and identified amino acid sequences. The frequency distribution is used to generate p-value probabilities of each score bin. 
The probabilities are then normalized with respect to score bins to generate normalized probabilities of all score bins. A protein identification probability is the multinomial probability of observing the given set of peptide scores. To reduce the effect of random matches, we employ a marginalized multinomial model for small values of cross-correlation scores. We demonstrate that the combination of the two independent methods provides a useful tool for protein identification from the results of database searches using tandem mass spectra. A receiver operating characteristic curve demonstrates the sensitivity and accuracy of the approach. The shortcomings of the models are related to cases where protein assignment is based on unusual peptide fragmentation patterns that dominate over the model encoded in the peptide identification process. We have implemented the approach in a program called PROT_PROBE.
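The two-hypothesis binomial test statistic described above reduces to a binomial log-likelihood ratio; a hedged sketch follows (parameter names are illustrative, not PROT_PROBE's API):

```python
from math import lgamma, log

def log_binom_pmf(k, n, p):
    """log of the binomial pmf C(n, k) * p^k * (1-p)^(n-k)."""
    lc = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return lc + k * log(p) + (n - k) * log(1 - p)

def protein_llr(k, n, p_null, p_alt):
    """Log maximum-likelihood ratio for a protein matched by k of n spectra.
    Null hypothesis: matches arise by chance in proportion to the relative
    protein length (p_null). Alternative: per-spectrum match probability
    p_alt derived from the peptide scores. Positive values favour a true
    identification. (An illustrative sketch of the two-hypothesis binomial
    model, not the published implementation.)"""
    return log_binom_pmf(k, n, p_alt) - log_binom_pmf(k, n, p_null)
```

Observing many more peptide matches than the protein's length alone would explain drives the ratio positive, which is the discrimination principle the abstract describes.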

13.
Taking the strong discontinuity approach as a framework for modelling displacement discontinuities and strain localization phenomena, this work extends previous results in infinitesimal strain settings to finite deformation scenarios. By means of the strong discontinuity analysis, and taking isotropic damage models as target continuum (stress–strain) constitutive equation, projected discrete (tractions–displacement jumps) constitutive models are derived, together with the strong discontinuity conditions that restrict the stress states at the discontinuous regime. A variable bandwidth model, to automatically induce those strong discontinuity conditions, and a discontinuous bifurcation procedure, to determine the initiation and propagation of the discontinuity, are briefly sketched. The large strain counterpart of a non‐symmetric finite element with embedded discontinuities, frequently considered in the strong discontinuity approach for infinitesimal strains, is then presented. Finally, some numerical experiments display the theoretical issues, and emphasize the role of the large strain kinematics in the obtained results. Copyright © 2003 John Wiley & Sons, Ltd.

14.
This paper describes a statistical test plan, and the associated analysis, to determine tensile and fracture properties for the ductile cast iron considered for the Swedish nuclear waste canisters. Large variations were found in ductility between tested canister inserts and between specimens taken from different locations in each insert. A large number of tested tensile specimens were subsequently analysed by fractography and metallography to relate low ductility values to the size and type of casting defects. Loss of ductility could be related to slag defects and, to a lesser extent, to high pearlite content, low nodularity and chunky graphite. Slag defects were modelled by an elasto-plastic fracture mechanics model for penny-shaped defects, and semi-empirical models were used for the other defect types. The fracture model was incorporated into a probabilistic scheme to compute the distribution of elongation for the inserts and the associated defect size. The computed ductility distribution agrees very well with measured data, whereas the computed defect size distribution is underestimated. By including crack growth resistance and various aspect ratios of defects, much better agreement with observed defects can be achieved.

15.
Simulation of fatigue crack growth in components with random defects
The paper presents a probabilistic method for the simulation of fatigue crack growth from crack-like defects in the combined operating and residual stress fields of an arbitrary component. The component geometry and stress distribution are taken from a standard finite element stress analysis. Number, size and location of crack-like defects are ‘drawn’ from probability distributions. The presented fatigue assessment methodology has been implemented in a newly developed finite-element post-processor, P • FAT, and is useful for the reliability assessment of fatigue critical components. General features of the finite element post-processor have been presented. Important features, such as (i) the determination of the life-controlling defect, (ii) growth of short and long cracks, (iii) fatigue strength and fatigue life distribution and (iv) probability of component fatigue failure, have been treated and discussed. Short and long crack growth measurements have been presented and used for verification of the crack growth model presented.

16.
The behavior of defects (inclusions and cavities) on the fatigue life of plates with fastener holes under tensile loading has been analyzed. Special attention has been paid to the influence of the size and location of defects on the fatigue life of plates with fastener holes. Thirty-five different finite element models of plates with different sizes and locations of defects are established, and the nominal stress method is used to estimate the fatigue lives of these models. The results show that there is a region whose center is the maximum-stress point of the hole without defects. When a defect is located in this region, its influence on the fatigue life of the plate is obvious; when the defects are far away from this region's center, they hardly influence the fatigue life of plates with fastener holes. The larger the size of the defect, the bigger this region is. Within this region, the larger the defect and the shorter the distance between the defect and the region's center, the shorter the fatigue life of plates with fastener holes.

17.
The statistical characteristics of the time required by the crack size to reach a specified length are sought. This time is treated as the random variable time-to-failure and the analysis is cast as a first-passage time problem. The fatigue crack propagation growth equation is randomized by employing the pulse-train stochastic process model. The resulting equation is stochastically averaged so that the crack size can be approximately modelled as a Markov process. Choosing the appropriate transition density function for this process and setting the proper initial and boundary conditions, it becomes possible to solve the associated forward Kolmogorov equation, expressing the solution in the form of an infinite series. Next, the survival probability of a component, the cumulative distribution function and the probability density function of the first-passage time are determined in series form as well. Corresponding expressions are also derived for its mean and mean square. Verification of the theoretical results is attempted through comparisons with actual experimental data and numerical simulation studies.
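The first-passage idea can be illustrated with a crude Monte Carlo stand-in: a Paris-type growth law with a randomly scattered coefficient, integrated until the crack reaches a critical size. All material constants below are illustrative, and the lognormal scatter replaces the paper's pulse-train randomization:

```python
import random
from math import pi, sqrt

def cycles_to_critical(a0, ac, dsigma, m, C, dN=1000):
    """Integrate the Paris-type law da/dN = C * (dsigma*sqrt(pi*a))^m in
    blocks of dN cycles until the crack reaches the critical size ac."""
    a, n = a0, 0
    while a < ac:
        a += dN * C * (dsigma * sqrt(pi * a)) ** m
        n += dN
    return n

def first_passage_sample(n_samples=500, seed=42):
    """Monte Carlo sample of the time-to-failure: the Paris coefficient C is
    lognormally scattered around a nominal value (an illustrative stand-in
    for the paper's stochastic averaging of a randomized growth equation)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_samples):
        C = 1e-11 * rng.lognormvariate(0.0, 0.3)
        out.append(cycles_to_critical(a0=1e-3, ac=1e-2, dsigma=100.0,
                                      m=3.0, C=C))
    return out
```

Sorting the samples gives an empirical CDF of the first-passage time, the quantity the paper derives analytically as an infinite series.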

18.
19.
Closed-form normalized expressions for the field components inside a single-layer rectangular solenoid are derived from a model in which the solenoid is approximated by finite length current sheets of infinitesimal thickness. The equations are extended by superposition to include the case of a multi-layered solenoid, and the effects of nearby magnetic materials are included by employing the method of images. Computer generated field plots compare favorably with measured data.
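For the circular-cross-section analogue of such a current-sheet model (the paper treats rectangular solenoids, whose field components need the derived closed forms instead), the on-axis field of a finite solenoid is a standard benchmark worth sketching:

```python
from math import sqrt, pi

MU0 = 4e-7 * pi  # vacuum permeability, T*m/A

def solenoid_axis_field(n_per_m, current, length, radius, z):
    """On-axis field of a finite cylindrical current sheet:
    Bz = (mu0*n*I/2) * (cos(theta1) + cos(theta2)),
    with z measured from the solenoid center and theta1, theta2 the angles
    subtended by the two end faces."""
    zp, zm = z + length / 2, z - length / 2
    cos1 = zp / sqrt(zp * zp + radius * radius)
    cos2 = -zm / sqrt(zm * zm + radius * radius)
    return 0.5 * MU0 * n_per_m * current * (cos1 + cos2)
```

In the long-solenoid limit the center field approaches mu0*n*I, and the field at either end face drops to half the center value, both standard checks on a current-sheet formulation.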

20.
Stochastic models of fracture for composite materials based on the concept of damage accumulation are proposed. An assumption is used that the density of defects leading to fracture (or to failure in a more general sense) is sufficiently small. Asymptotic formulas for the probability distributions of the damage rate, failure stress, lifetime and other reliability and longevity parameters are obtained. The applicability of the proposed stochastic models to reliability and size-effect prediction in composite materials is shown.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号