Similar Literature
 20 similar documents found (search time: 828 ms)
1.
A strain-based forming limit criterion is widely used in the sheet-metal forming industry to predict necking. However, this criterion is usually valid only when the strain path is linear throughout the deformation process [1]. In incremental sheet forming, the strain path is often severely nonlinear throughout the deformation history, so using a strain-based forming limit criterion often leads to erroneous assessments of formability and failure prediction. The stress-based forming limit, on the other hand, is insensitive to changes in the strain path, and it is therefore used here for the first time to model the necking limit in incremental sheet forming. The stress-based forming limit is also combined with a fracture limit based on the maximum shear stress criterion to represent necking and fracture together. A general mapping method from the strain-based FLC to the stress-based FLC using a non-quadratic yield function is derived. A simulation model for single point incremental forming of AA 6022-T43 is evaluated and its accuracy checked against experiments. Using the path-independent necking and fracture limits, the deformation mechanism in incremental sheet forming can be explained successfully. The proposed model gives a sound scientific basis for the development of ISF under nonlinear strain paths and for its advantages over the conventional sheet forming process.
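The strain-to-stress mapping idea can be sketched under simplifying assumptions: proportional loading, a von Mises yield surface (the paper itself uses a non-quadratic yield function) and power-law hardening. The hardening constants `K` and `n` below are placeholders, not values from the paper.

```python
import numpy as np

def strain_to_stress_flc(eps1, eps2, K=500.0, n=0.25):
    """Map strain-based FLC points (major/minor strain) to stress space,
    assuming proportional von Mises plane-stress loading."""
    eps1, eps2 = np.asarray(eps1, float), np.asarray(eps2, float)
    rho = eps2 / eps1                        # strain ratio
    alpha = (2.0 * rho + 1.0) / (rho + 2.0)  # stress ratio sigma2/sigma1
    # von Mises effective strain (incompressibility: eps3 = -(eps1+eps2))
    eps_eff = (2.0 / np.sqrt(3.0)) * np.sqrt(eps1**2 + eps1 * eps2 + eps2**2)
    sig_eff = K * eps_eff**n                 # power-law hardening (assumed)
    sig1 = sig_eff / np.sqrt(1.0 - alpha + alpha**2)
    return sig1, alpha * sig1

# illustrative FLC points: plane strain, equibiaxial, near-uniaxial
s1, s2 = strain_to_stress_flc([0.30, 0.25, 0.35], [0.0, 0.25, -0.15])
```

For plane strain (ρ = 0) the sketch gives α = 1/2, and for equibiaxial stretching (ρ = 1) it gives σ1 = σ2, as expected.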

2.
The aim of this work is to present a computationally efficient algorithm to simulate butt curl deformation. In our previous articles [1, 2], the nonlinearities due to the viscoplastic law and the contact condition with the bottom block were solved by means of duality methods involving two multipliers. In [1] these multipliers were computed with a fixed-point algorithm and in [2] with a generalized Newton’s method. In this work we improve the viscoplastic algorithm by means of a generalized duality method with variable parameters. We present numerical results showing the applicability of the resulting algorithm to casting processes.

3.
The failure of quasi-brittle specimens weakened by sharp or blunted notches and cavities is analyzed under quasi-static loading. The load at failure is obtained with the Thick Level Set (TLS) damage model. In this model, the damage gradient is bounded, so the minimal distance between a point where the damage is 0 (sound material) and a point where it is 1 (fully damaged) is an imposed characteristic length. This length plays an important role in the damage evolution and in the failure load. The paper shows that the TLS predictions are relevant. A comparison with the coupled criterion (CC) of Leguillon (2002) is given. Good agreement is obtained for cavities and V-notches provided that Irwin's characteristic length is small compared to the notch depth (a condition for the applicability of the CC). A comparison with experimentally obtained failure loads is also given. In the numerical simulations, uniform stresses are imposed at infinity using a new finite element mapping technique (Cloirec 2005).

4.
We study numerical simulations of large (\({N{\approx}10^4}\)) two-dimensional quasi-static granular assemblies subjected to a slowly increasing deviator stress. We report some peculiarities in the behavior of these packings that have not yet been addressed. The number of sliding contacts is not necessarily related to stability: first the number of sliding contacts rises linearly and smoothly with the applied stress. Then, at approximately half the peak stress, the increase slows down, a plateau develops, and a decrease follows. The spatial organization of sliding contacts also changes: during the first half of the simulation, sliding contacts are uniformly distributed throughout the packing, but in the second half they become concentrated in certain regions. This suggests that the loss of homogeneity occurs well before the appearance of shear bands. During the second half, events appear in which the number of sliding contacts drops suddenly and then rapidly recovers. We show that these events are in fact local instabilities in the packing, and that they become more frequent as failure is approached. For these two reasons we call them precursors, since they are similar to the precursors recently observed in both numerical (Staron et al. Phys Rev Lett 89:204302, 2002; Nerone et al. Phys Rev E 67:011302, 2003) and experimental (Gibiat et al. J Acoust Soc Am 123:3142, 2009; Scheller et al. Phys Rev E 74:031311, 2006; Zaitsev et al. Eur Phys Lett 83:64003, 2008; Aguirre et al. Phys Rev E 73:041307, 2006) studies of avalanches.
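The notion of a "sliding contact" can be made concrete with the Coulomb criterion \(|f_t| = \mu f_n\): a contact slides when its tangential force saturates the friction cone. A minimal sketch on synthetic contact forces (the friction coefficient and the force distributions are assumptions, not data from the simulations):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.5                                  # friction coefficient (assumed)
fn = rng.uniform(0.1, 1.0, 10_000)        # normal forces at each contact
ft = rng.uniform(0.0, 0.6, 10_000)        # tangential force magnitudes

# a contact is "sliding" when the Coulomb criterion |f_t| = mu * f_n saturates;
# the small tolerance accounts for numerical saturation in a DEM code
sliding = ft >= mu * fn * (1.0 - 1e-9)
fraction_sliding = sliding.mean()
```

In an actual discrete element simulation, `fn` and `ft` would come from the contact solver, and `fraction_sliding` would be tracked against the applied deviator stress.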

5.
This paper presents a finite element approach for modelling three-dimensional crack propagation in quasi-brittle materials, based on the strain injection and crack-path field techniques. These numerical techniques were already tested and validated by static and dynamic simulations of classical 2D benchmarks [Dias et al., in: Monograph CIMNE No-134. International Center for Numerical Methods in Engineering, Barcelona, (2012); Oliver et al. in Comput Methods Appl Mech Eng 274:289–348, (2014); Lloberas-Valls et al. in Comput Methods Appl Mech Eng 308:499–534, (2016)] and also for modelling tensile crack propagation in real concrete structures, such as concrete gravity dams [Dias et al. in Eng Fract Mech 154:288–310, (2016)]. The main advantages of the methodology are its low computational cost and the independence of the results from the size and orientation of the finite element mesh. These advantages were highlighted in previous works by the authors and motivate the present extension to 3D cases. The proposed methodology is implemented in the finite element framework using continuum constitutive models equipped with strain softening, and consists essentially of injecting the elements that are candidates to capture the cracks with goal-oriented strain modes, improving the performance of the injected elements in simulating propagating displacement discontinuities. The goal-oriented strain modes are introduced by resorting to mixed formulations and to the Continuum Strong Discontinuity Approach (CSDA), while the crack position inside the finite elements is retrieved via the crack-path field technique. Representative numerical simulations of 3D benchmarks show that the advantages of the methodology already pointed out in 2D carry over to 3D scenarios.

6.
Censored data are quite common in statistics and have been studied in depth in recent years [for some references, see Powell (J Econom 25(3):303–325, 1984), Murphy et al. (Math Methods Stat 8(3):407–425, 1999), Chay and Powell (J Econ Perspect 15(4):29–42, 2001)]. In this paper, we consider censored high-dimensional data. High-dimensional models are in some ways more complex than their low-dimensional versions, and therefore different techniques are required. For the linear case, appropriate estimators based on penalised regression have been developed in recent years [see for example Bickel et al. (Ann Stat 37(4):1705–1732, 2009), Koltchinskii (Bernoulli 15:799–828, 2009)]. In particular, in sparse contexts the \(l_1\)-penalised regression (also known as the LASSO) [see Tibshirani (J R Stat Soc Ser B 58:267–288, 1996), Bühlmann and van de Geer (Statistics for high-dimensional data. Springer, Heidelberg, 2011) and references therein] performs very well. Only little theoretical work has been done on censored linear models in a high-dimensional context. We therefore consider a high-dimensional censored linear model in which the response variable is left censored, and propose a new estimator designed for high-dimensional linear censored data. Theoretical non-asymptotic oracle inequalities are derived.
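To make the \(l_1\)-penalised regression concrete, the sketch below runs a plain LASSO (via ISTA, i.e. proximal gradient descent) on synthetic left-censored data. This is a naive baseline that simply feeds the censored responses into the ordinary LASSO loss; the estimator proposed in the paper modifies the loss to account for the censoring mechanism, which this sketch does not.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """l1-penalised least squares, (1/2n)||y - Xb||^2 + lam*||b||_1, via ISTA."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz const of gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]                 # sparse signal
y_latent = X @ beta_true + 0.3 * rng.standard_normal(n)
y = np.maximum(y_latent, -1.0)                   # left-censoring at c = -1

beta_hat = lasso_ista(X, y, lam=0.1)
```

The censoring attenuates the naive estimates toward zero, which is exactly the bias a censoring-aware estimator is designed to remove.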

7.
The crystallization from solution of an active pharmaceutical ingredient requires knowledge of the solubility over the entire temperature range investigated during the process. During the development of a new active ingredient, however, these data are missing; their experimental determination is possible but tedious. The UNIFAC group contribution method (Fredenslund et al., Vapor–liquid equilibria using UNIFAC: a group contribution method, 1977; AIChE J 21:1086, 1975) can be used to predict this physical property. Several modifications of this model have been proposed since its development in 1977: modified UNIFAC of Dortmund (Weidlich et al., Ind Eng Chem Res 26:1372, 1987; Gmehling et al., Ind Eng Chem Res 32:178, 1993), Pharma-modified UNIFAC (Diedrichs et al., Evaluation und Erweiterung thermodynamischer Modelle zur Vorhersage von Wirkstofflöslichkeiten, PhD Thesis, 2010), KT-UNIFAC (Kang et al., Ind Eng Chem Res 41:3260, 2002), \(\ldots \) In this study, we used the UNIFAC model with a linear temperature dependence of the interaction parameters, as in Pharma-modified UNIFAC, and with structural groups as defined by the KT-UNIFAC first-order model. More than 100 binary datasets were involved in the estimation of the interaction parameters. These new parameters were then used to calculate activity coefficients and the solubility of several molecules in various solvents at different temperatures. The model gives better results than the original UNIFAC and shows good agreement between the experimental and calculated solubilities.
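The link between activity coefficients and solubility can be sketched with the classical ideal-solubility relation \(\ln(x\gamma) = -\frac{\Delta H_{fus}}{R}\left(\frac{1}{T} - \frac{1}{T_m}\right)\), solved by fixed-point iteration. A UNIFAC-type model would supply \(\gamma(x, T)\); here a constant placeholder is used, and the melting data are illustrative numbers only:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def ideal_solubility(T, T_m, dH_fus):
    """Ideal mole-fraction solubility (activity coefficient = 1)."""
    return np.exp(-dH_fus / R * (1.0 / T - 1.0 / T_m))

def solubility(T, T_m, dH_fus, gamma, n_iter=50):
    """Solve ln(x * gamma(x, T)) = ideal term by fixed-point iteration.
    `gamma` is a callable (x, T) -> activity coefficient; a UNIFAC-type
    model would be plugged in here."""
    x = ideal_solubility(T, T_m, dH_fus)
    for _ in range(n_iter):
        x = ideal_solubility(T, T_m, dH_fus) / gamma(x, T)
    return x

# illustrative melting data: T_m = 430 K, dH_fus = 26 kJ/mol, constant gamma
x_sol = solubility(298.15, 430.0, 26_000.0, gamma=lambda x, T: 2.5)
```

With a composition-dependent \(\gamma(x, T)\), the fixed-point loop would iterate until the mole fraction converges instead of terminating in one step.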

8.
Inference procedures based on the minimization of divergences are popular statistical tools. Beran (Ann Stat 5(3):445–463, 1977) proved consistency and asymptotic normality of the minimum Hellinger distance (MHD) estimator. This method was later extended to the large class of disparities in discrete models by Lindsay (Ann Stat 22(2):1081–1114, 1994), who proved the existence of a sequence of roots of the estimating equation that is consistent and asymptotically normal. However, the current literature does not provide a general asymptotic result for the minimizer of a generic disparity. In this paper, we prove, under very general conditions, an asymptotic representation of the minimum disparity estimator itself (and not just of a root of the estimating equation), thus generalizing the results of Beran (1977) and Lindsay (1994). This leads to a general framework for minimum disparity estimation encompassing both discrete and continuous models.
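A minimum Hellinger distance estimator in a discrete model (the setting of Lindsay 1994) can be sketched in a few lines: minimize the Hellinger distance between the empirical pmf and the model pmf, which is equivalent to maximizing the affinity \(\sum_k \sqrt{d_k f_k(\theta)}\). The choice of a Poisson family and the optimizer bounds below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

def mhd_poisson(sample):
    """Minimum Hellinger distance fit of a Poisson model to count data."""
    ks, counts = np.unique(sample, return_counts=True)
    d = counts / counts.sum()             # empirical pmf on the observed support

    def neg_affinity(lam):                # minimizing HD == maximizing affinity
        return -np.sum(np.sqrt(d * poisson.pmf(ks, lam)))

    return minimize_scalar(neg_affinity, bounds=(1e-6, 50.0),
                           method="bounded").x

rng = np.random.default_rng(2)
lam_hat = mhd_poisson(rng.poisson(4.0, size=2000))
```

At the true model the MHD estimator is fully efficient, so on clean Poisson data `lam_hat` should land close to the sample mean; its appeal is robustness when some cells are contaminated.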

9.
Numerical simulations based on the bifurcation and imperfection versions of strain localization theory are used in this paper to predict the failure loci of metals, and are applied to an advanced high strength steel subjected to proportional loading paths. The results are evaluated against the 3D unit cell analyses of Dunand and Mohr (J Mech Phys Solids 66(1):133–153, 2014. doi: 10.1016/j.jmps.2014.01.008) available in the literature. The Gurson porous plasticity model (Gurson in J Eng Mater Technol 99(1):2–15, 1977. doi: 10.1115/1.344340) is used to induce strain softening and drive the localization process. The effects of void growth, void nucleation and void softening in shear are investigated over a large range of stress triaxialities and Lode parameters. A correlation between the imperfection and bifurcation results is established.
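The porous-plasticity ingredient can be illustrated with the Gurson–Tvergaard yield function \(\Phi = (\sigma_{eq}/\sigma_y)^2 + 2 q_1 f \cosh\!\left(\tfrac{3 q_2 \sigma_m}{2\sigma_y}\right) - 1 - (q_1 f)^2\), whose porosity term produces the strain softening that drives localization. A sketch (the \(q_1, q_2\) values are the common Tvergaard defaults, not parameters from the paper):

```python
import numpy as np

def gurson_phi(sig_eq, sig_m, sig_y, f, q1=1.5, q2=1.0):
    """Gurson-Tvergaard yield function; phi = 0 defines the yield surface."""
    return ((sig_eq / sig_y) ** 2
            + 2.0 * q1 * f * np.cosh(1.5 * q2 * sig_m / sig_y)
            - 1.0 - (q1 * f) ** 2)

def softened_yield(sig_y, f, triax, q1=1.5, q2=1.0):
    """Effective yield stress at fixed triaxiality sig_m/sig_eq, by bisection."""
    lo, hi = 0.0, sig_y
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gurson_phi(mid, triax * mid, sig_y, f, q1, q2) < 0.0:
            lo = mid            # still inside the yield surface
        else:
            hi = mid
    return 0.5 * (lo + hi)

# porosity softens the matrix, and increasingly so at high stress triaxiality
sy_dense = softened_yield(300.0, 0.0, 1.0)    # no voids: full yield stress
sy_porous = softened_yield(300.0, 0.05, 1.0)  # 5% porosity at triaxiality 1
```

The cosh term is what makes the softening triaxiality-dependent, consistent with void growth being strongest under hydrostatic tension.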

10.
The argon triple point (\(T_{90} = 83.8058\,\hbox {K}\)) is a fixed point of the International Temperature Scale of 1990 (Preston-Thomas in Metrologia 27:3, 1990). Cells for the realization of this fixed point have been manufactured by several European metrology institutes (Pavese in Metrologia 14:93, 1978; Pavese et al. in Temperature, part 1, American Institute of Physics, College Park, 2003; Hermier et al. in Temperature, part 1, American Institute of Physics, College Park, 2003; Pavese and Beciet in Modern gas-based temperature and pressure measurement, Springer, New York, 2013). The Institute of Low Temperature and Structure Research has at its disposal several argon cells of various constructions, produced over a span of 40 years, used for the calibration of capsule-type standard platinum resistance thermometers (CSPRTs). These cells differ in mechanical design and thermal properties, as well as in the source of the gas filling the cell. This paper presents data on the differences between temperature values obtained during the realization of the triple point of argon in these cells. For the determination of the temperature, a heat-pulse method was applied (Pavese and Beciet in Modern gas-based temperature and pressure measurement, Springer, New York, 2013). The comparisons were performed using three CSPRTs. The temperature differences were determined in relation to a reference function \(W(T)=R(T_{90})/R(273.16\,\hbox {K})\) in order to avoid the impact of CSPRT resistance drift between measurements in the argon cells. Melting curves and uncertainty budgets of the measurements are given, and the construction of the measuring apparatus is also presented.

11.
In this study, we propose a procedure for simultaneously testing \(l\) \((l\ge 1)\) linear relations on \(k\) \((k\ge 2)\) high-dimensional mean vectors with heterogeneous covariance matrices, which extends the result derived by Nishiyama et al. (J Stat Plan Inference 143(11):1898–1911, 2013) and does not need the normality assumption. The newly proposed test statistic is motivated by Bai and Saranadasa (Statistica Sinica 6(2):311–329, 1996) and Chen and Qin (Ann Stat 38(2):808–835, 2010). As a special case, our result can be applied to multivariate analysis of variance, that is, testing the equality of \(k\) high-dimensional mean vectors.
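The flavour of such test statistics can be illustrated with the Chen–Qin (2010) two-sample building block: an unbiased estimate of \(\Vert\mu_1-\mu_2\Vert^2\) that omits the self-product terms \(X_i'X_i\) and needs no covariance inverse, so it works when the dimension far exceeds the sample sizes. (The paper's statistic generalizes this to \(l\) linear relations among \(k\) samples; the data below are synthetic.)

```python
import numpy as np

def chen_qin_stat(X, Y):
    """Unbiased estimate of ||mu_X - mu_Y||^2 (Chen and Qin 2010 style):
    the diagonal Gram terms are excluded, so no covariance inverse is needed."""
    n1, n2 = len(X), len(Y)
    Gx, Gy, Gxy = X @ X.T, Y @ Y.T, X @ Y.T
    t1 = (Gx.sum() - np.trace(Gx)) / (n1 * (n1 - 1))
    t2 = (Gy.sum() - np.trace(Gy)) / (n2 * (n2 - 1))
    return t1 + t2 - 2.0 * Gxy.mean()

rng = np.random.default_rng(3)
p = 500                                   # dimension far exceeds sample sizes
X = rng.standard_normal((30, p))
Y = rng.standard_normal((40, p)) + 0.5    # mean shifted in every coordinate
T = chen_qin_stat(X, Y)                   # targets p * 0.5**2 = 125
```

A full test would standardize `T` by an estimate of its null variance; only the unbiased squared-distance kernel is sketched here.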

12.
João Lita da Silva, TEST (2018) 27(2):477–495
In one-dimensional regression models, we establish a rate for the rth moment convergence \((r \geqslant 1)\) of the ordinary least-squares estimator that involves the regressors explicitly, answering an open question recently raised by Afendras and Markatou (Test 25:775–784, 2016). An extension of the classic Theorem 2.6.1 of Anderson (The Statistical Analysis of Time Series, Wiley, New York, 1971) is also presented.

13.
This work incorporates measurement uncertainty estimation into the model framework proposed by dos Santos and Brandi (Clean Technol Environ Policy, doi: 10.1007/s10098-015-0919-8, 2015). It brings metrological procedures to sustainability assessment by using the GUM framework (GUF) together with the Monte Carlo method (MCM) (BIPM, Evaluation of measurement data—guide to the expression of uncertainty in measurement, 2008a; Evaluation of measurement data—Supplement 1 to the “Guide to the expression of uncertainty in measurement”—propagation of distributions using a Monte Carlo method, 2008b). The GUF uses the law of propagation of uncertainties, while the MCM propagates probability distributions. This scheme is applied to analyze the Integration and Logistic Infrastructure sustainability dimension of a biofuel supply chain in six countries (Santos et al. 2014). An initial set of specific indicators (input quantities) satisfying well-established criteria is aggregated in a methodological manner into a single aggregate indicator. The Canberra and the normalized Euclidean distances are adopted as model functions. As recommended by GUM Supplement 1 (BIPM 2008b), the GUF and the MCM results are compared, performing the MCM with \(10^6\) random trials. This allows the numerical statistical results to be determined with the precision required for comparing the sustainability levels of the six countries. It is shown that the use of the GUF is not validated for the adopted model functions. The two fundamental reasons are the truncation of the Taylor expansion inherent to the GUF and the deviation of the probability density function from the normal distribution (BIPM 2008b; Couto et al., Theory and applications of Monte Carlo simulations, 2013). This result was predictable because of the nonlinear dependence of the Canberra and normalized Euclidean distances on the indicators. The MCM calculations show that the uncertainties depend on the choice of aggregate metric, consequently affecting the countries' sustainability ranking. The results demonstrate that the Canberra and Euclidean metrics separate the developed from the developing countries into clusters. The calculations for the single sustainability indicator and its uncertainty suggest that the Euclidean distance separates the countries better than the Canberra distance, and may thus be more adequate for representing the Integration and Logistic Infrastructure sustainability dimension of a biofuel supply chain.
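A minimal sketch of the MCM step described above: propagate assumed indicator uncertainties through the Canberra and normalized Euclidean distances to a (hypothetical) ideal point, and read off the mean and standard uncertainty of each aggregate indicator. All numbers are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 10**6                                   # MCM trials, per GUM Supplement 1

# hypothetical normalised indicators for one country; ideal point at 1
x_mean = np.array([0.6, 0.4, 0.8, 0.5])     # indicator values (assumed)
x_u = np.array([0.05, 0.04, 0.06, 0.05])    # standard uncertainties (assumed)
ideal = np.ones(4)

samples = rng.normal(x_mean, x_u, size=(M, 4))

# Canberra: sum |a - b| / (|a| + |b|);  Euclidean: ||a - b||
d_can = np.sum(np.abs(samples - ideal) / (np.abs(samples) + ideal), axis=1)
d_euc = np.linalg.norm(samples - ideal, axis=1)

can_mean, can_u = d_can.mean(), d_can.std(ddof=1)
euc_mean, euc_u = d_euc.mean(), d_euc.std(ddof=1)
```

Because both distances are nonlinear in the indicators, the output distributions are not Gaussian; this is precisely why the MCM is needed to validate (or here, invalidate) the first-order GUF propagation.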

14.
The main objective for guaranteeing high efficiency in the press shop is to produce sheet metal parts without failure. The feasibility of sheet metal parts is nowadays ensured during the development process by comparing the strains occurring in the simulation with the Forming Limit Diagram (FLD). The experimental procedure for determining the FLD is standardized in ISO 12004-2 [1]. This procedure is valid with high accuracy only for proportional, unbroken strain paths; in most industrial forming operations, however, non-linear strain paths occur. To resolve this problem, a phenomenological approach was introduced by Volk [2], the so-called Generalized Forming Limit Concept (GFLC). Localized necking and the remaining formability for any arbitrary non-linear strain path can be predicted with the GFLC. However, experimental investigation of multi-linear strain paths is highly complex in practice and involves a range of testing equipment, e.g. different specimens, testing machines and tools. In this paper an alternative method is introduced, using a cruciform specimen and a draw bead tool on a sheet metal testing machine. The different draw bead heights allow the creation of arbitrary strain states, which can be changed at different punch heights. Conventionally, cruciform specimens are used to determine the yield loci in the first quadrant of stress space at low strain values. To enable a cruciform specimen to evaluate strain limits comparable to the conventional Nakajima test, the geometry is optimized with respect to the maximum achievable strains in the specimen center. The developed specimen and tool allow testing of materials under multi-axial strain states with reduced testing effort.

15.
‘Sleeping beauty’ is a term used to describe a research article that remains relatively uncited for several years and then suddenly blossoms. New technology now allows us to detect such articles more easily than before, and sleeping beauties can be found in numerous disciplines. In this article we describe three sleeping beauties that we have found in psychology: Stroop (J Exp Psychol 18:643–662, 1935), Maslow (Psychol Rev 50(4):370–396, 1943), and Simon (Psychol Rev 63(2):129–138, 1956).

16.
During the past decades, several inverse approaches have been developed to identify the stress-crack opening relationship (\({\sigma }-w\)) by means of indirect test methods such as notched three-point bending, wedge splitting, and round panel testing. The aim is to establish reliable constitutive models for the tensile behavior of fiber reinforced concrete materials, suitable for structural design. Within this context, the adaptive inverse analysis (AIA) was recently developed to facilitate a fully general and automated inverse analysis scheme, applicable in conjunction with analytical or finite element simulation of the experimental response. This paper presents a new formulation of the adaptive refinement criterion of the AIA method. The paper demonstrates that the refinement criterion of the nonlinear least-squares curve fitting process is significantly improved by coupling the model error to the relationship between the crack mouth opening and the crack opening displacement (\(w_{\mathrm{cmod}}-w_{\mathrm{cod}}\)). This enables adaptive refinement of the \({\sigma }-w\) model in the line segment with the maximum model error, which significantly improves the numerical efficiency of the AIA method without any loss of robustness. The improved method is applied to various fiber reinforced concrete composites and the results are benchmarked against the inverse analysis method suggested by the Japanese Concrete Institute (Method of test for fracture energy of concrete by use of notched beam, Japanese Concrete Institute Standard, Tokyo, 2003) and recently adopted in ISO 19044 (Test methods for fibre-reinforced cementitious composites—load-displacement curve using notched specimen, 2015). The benchmarking demonstrates that the AIA method, in contrast to the JCI/ISO method, facilitates direct determination of the tensile strength and of operational multi-linear \({\sigma }-w\) models.
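The nonlinear least-squares step at the core of such inverse analyses can be sketched as fitting a multi-linear \(\sigma\)-\(w\) law to measured softening data. The snippet below fits a bilinear law to synthetic data with `scipy.optimize.least_squares`; the parametrization, bounds and "measurements" are all illustrative, and the AIA's adaptive segment refinement is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

def bilinear_sw(w, f_t, w1, s1, w_c):
    """Bilinear softening law through (0, f_t), (w1, s1), (w_c, 0)."""
    w = np.asarray(w, float)
    seg1 = f_t + (s1 - f_t) * w / w1
    seg2 = s1 * (w_c - w) / (w_c - w1)
    return np.where(w <= w1, seg1, np.clip(seg2, 0.0, None))

# synthetic "measured" softening curve (tensile strength 3.0, illustrative)
w_data = np.linspace(0.0, 1.0, 60)
sigma_data = (bilinear_sw(w_data, 3.0, 0.1, 1.0, 0.9)
              + 0.02 * np.random.default_rng(5).standard_normal(60))

# bounds keep w_c > w1 so the second segment stays well defined
res = least_squares(
    lambda q: bilinear_sw(w_data, *q) - sigma_data,
    x0=[2.5, 0.2, 0.8, 1.0],
    bounds=([0.1, 0.01, 0.0, 0.5], [10.0, 0.4, 5.0, 2.0]))
f_t_hat, w1_hat, s1_hat, wc_hat = res.x
```

In the AIA proper, the residual would come from a simulated structural response rather than the \(\sigma\)-\(w\) curve directly, and the kink points would be added adaptively where the model error is largest.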

17.
Okasha et al. (J Failure Anal Prevention, 2017. doi: 10.1007/s11668-017-0263-x) introduced the novel Topp–Leone geometric distribution. Here, we introduce a class of distributions containing [32]’s distribution as a particular case. The class of distributions contains several important distributions, including the Topp–Leone geometric, Topp–Leone Poisson, Topp–Leone logarithmic, Topp–Leone binomial and Topp–Leone negative binomial distributions. We derive comprehensive mathematical properties of the class. We obtain closed form expressions for the density function, cumulative distribution function, survival and hazard rate functions, moments, mean residual lifetime, mean past lifetime, order statistics and moments of order statistics. The class is shown to be more flexible by reanalyzing the real data set in [32].

18.
The present work demonstrates the implementation of a mass-conserving sharp-interface immersed boundary method for the simulation of flows in branched arterial geometries. A simplified two-dimensional arterial junction is considered to capture the preliminary flow physics in the aortic regions. Numerical solutions are benchmarked against established experimental PIV results in Ensley et al. (Ann Thorac Surg 68(4):1384–1390, 1999) and numerical predictions in Gilmanov and Sotiropoulos (J Comput Phys 207(2):457–492, 2005) and de Zelicourt et al. (Comput Fluids 38(9):1749–1762, 2009). Simulations are further carried out for pulsatile flows and for the effects of blockages near the junctions (due to stenosis or atherosclerosis). Instabilities in the flow structures near the junction and the resulting changes in the downstream pulsation frequency were observed. These changes reflect the physiological heart defects that arise from a poorly working valve (due to blockage), giving rise to chest pain and breathing instability, and can potentially be used as a detection tool for arterial diseases.

19.
Rolled sheet metal alloys exhibit plastic anisotropy, which leads to the formation of ears during the deep drawing process. An analytical function proposed by Yoon et al. (Int J Plast 27(8):1165–1184, 2011) predicts the earing profile in circular cup drawing from the yield stress and r-value directionalities. In this study, this analytical approach is applied to the deep drawing of Ti-6Al-4V at elevated temperatures up to 400 °C. Three yield criteria, namely Hill 1948, Barlat 1989 and Barlat Yld2000-2d, are used to obtain the directionality inputs for the analytical formula. The analytical model is validated against experimental results and FE simulations and is found to match closely while requiring far less CPU time. FE simulations were also conducted with the various yield functions. Barlat Yld2000-2d is considered the most suitable yield criterion for accurate earing prediction in deep drawing of Ti-6Al-4V, as the input for both the analytical and FEM models.
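The directionality inputs mentioned above can be illustrated for Hill 1948, whose in-plane r-value variation has the closed form \(r(\theta) = \frac{H + (2N - F - G - 4H)\sin^2\theta\cos^2\theta}{F\sin^2\theta + G\cos^2\theta}\). A sketch that calibrates \(F, G, H, N\) from three measured r-values and evaluates the directionality (the r-values below are assumed, not measured Ti-6Al-4V data):

```python
import numpy as np

def hill48_params(r0, r45, r90):
    """Hill 1948 anisotropy parameters from r-values (G + H = 1 convention)."""
    G = 1.0 / (1.0 + r0)
    H = r0 / (1.0 + r0)
    F = r0 / (r90 * (1.0 + r0))
    N = (r45 + 0.5) * (F + G)
    return F, G, H, N

def r_value(theta, F, G, H, N):
    """In-plane r-value at angle theta (radians) from the rolling direction."""
    s, c = np.sin(theta), np.cos(theta)
    return ((H + (2.0 * N - F - G - 4.0 * H) * s**2 * c**2)
            / (F * s**2 + G * c**2))

# illustrative r-values (assumed numbers, not from the paper)
F, G, H, N = hill48_params(r0=1.5, r45=2.0, r90=2.5)
theta = np.radians(np.linspace(0.0, 90.0, 7))
r_dir = r_value(theta, F, G, H, N)
```

By construction the calibration reproduces the three input r-values exactly at 0°, 45° and 90°; the analytical earing formula then consumes such directionalities (together with yield stress directionalities) to predict the cup height profile.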

20.
Hidetoshi Murakami, TEST (2016) 25(4):674–691
When testing hypotheses in two-sample problems, the Lepage test has often been used to jointly test the location and scale parameters, and it has been discussed by many authors over the years. The Lepage test is a combination of the Wilcoxon statistic and the Ansari–Bradley statistic. Various Lepage-type tests have been proposed, with discussions of asymptotic relative efficiency (Duran et al., Biometrika 63:173–176, 1976; Goria, Stat Neerl 36:3–13, 1982), robustness and power comparisons (Neuhäuser, Commun Stat Theory Methods 29:67–78, 2000; Büning, J Appl Stat 29:907–924, 2002) and adaptive tests (Büning and Thadewald, J Stat Comput Sim 65:287–310, 2000). We derive an expression for the moment generating function of a linear combination of two linear rank statistics. As the suggested Lepage-type test, we use a combination of the generalized Wilcoxon statistic and the generalized Mood statistic. Deriving the exact critical value of the statistic becomes difficult as the sample sizes increase. In this situation, an approximation to the distribution function of the test statistic using higher-order moments can be useful. We use a moment-based approximation with an adjusted gamma polynomial to evaluate the upper tail probability of a Lepage-type test for finite sample sizes. We determine the asymptotic efficiencies of the Lepage and Lepage-type tests for various distributions.
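A Lepage-type statistic is easy to assemble from standard components: square and sum the standardized location (Wilcoxon) and scale (Ansari–Bradley) statistics and compare with a \(\chi^2_2\) law. The sketch below uses SciPy's implementations; recovering the Ansari–Bradley z-score from its two-sided p-value is a convenience of this sketch, not the paper's moment-based gamma-polynomial approximation.

```python
import numpy as np
from scipy.stats import ranksums, ansari, norm, chi2

def lepage(x, y):
    """Lepage-type statistic: squared standardized location (Wilcoxon) and
    scale (Ansari-Bradley) components, referred to a chi-square(2) law."""
    z_w = ranksums(x, y).statistic       # Wilcoxon rank-sum z-score
    p_a = ansari(x, y).pvalue            # Ansari-Bradley two-sided p-value
    z_a = norm.isf(p_a / 2.0)            # |z| recovered from the p-value
    L = z_w**2 + z_a**2
    return L, chi2.sf(L, df=2)

rng = np.random.default_rng(6)
x = rng.standard_normal(100)
y = 1.0 + 2.0 * rng.standard_normal(100)  # differs in location and scale
L, p_value = lepage(x, y)
```

Because the chi-square reference is asymptotic, small-sample tails are poorly captured; that is exactly the gap the paper's higher-order moment approximation is meant to close.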
