Similar Documents
20 similar documents found (search time: 15 ms)
1.
Significant advances have been made in methods for analyzing creep and creep rupture under varying stress. Attention so far, however, has focused on creep under deterministic stress histories. The time-dependent stress and environmental history of a typical structural element is rarely known with certainty and can, at best, be described in statistical terms. Idealizing creep as a Markov process and adopting strain-hardening theory, a simplified theory is proposed to characterize creep strain under random loading. The random loading refers to statically determinate uniaxial stress that changes stepwise with time only and forms an ergodic process. Given the feasibility of characterizing the strain distribution under such random loading, an indication is given of how a reliability-based design approach to creep problems can be developed. The theory and its application are illustrated by a numerical example.
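
The stepwise-random-loading idea lends itself to a compact simulation. Below is a minimal Python sketch of creep-strain accumulation under a random two-level stress process using the strain-hardening rule with a Norton-Bailey law; the material constants, step duration, and stress levels are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: creep under random stepwise stress, strain-hardening rule,
# Norton-Bailey law eps_c = A * sigma**n * t**m.  All constants are assumed.
import numpy as np

rng = np.random.default_rng(0)
A, n, m = 1e-9, 4.0, 0.4                    # assumed Norton-Bailey constants
dt = 10.0                                    # duration of each stress step (h)
stress_levels = np.array([80.0, 120.0])      # MPa; assumed two-state ergodic process

def creep_after_step(eps, sigma, dt):
    """Advance creep strain over one constant-stress step (strain hardening):
    find the equivalent time t* at which the current stress would have
    produced the accumulated strain, then integrate from t* to t* + dt."""
    t_star = (eps / (A * sigma**n)) ** (1.0 / m) if eps > 0 else 0.0
    return A * sigma**n * (t_star + dt) ** m

def simulate_history(n_steps):
    eps = 0.0
    for _ in range(n_steps):
        sigma = rng.choice(stress_levels)    # stepwise random stress
        eps = creep_after_step(eps, sigma, dt)
    return eps

# Monte Carlo over many random load histories -> creep strain distribution
samples = np.array([simulate_history(100) for _ in range(2000)])
print(f"mean creep strain {samples.mean():.3e}, std {samples.std():.3e}")
```

The resulting strain distribution is the ingredient a reliability-based design check would compare against an allowable strain.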

2.
Abstract: The efficiency of many eigensolution strategies is affected by the choice of starting vector. Poor initializations often result in slow convergence and, in certain instances, may lead to an incorrect or irrelevant answer. Selecting an appropriate starting vector becomes even more complicated when the structure involved is characterized by properties that are random in nature: a good initialization for one sample could be poor for another. Proper eigenvector initialization is therefore essential for efficient random eigenvalue analysis in uncertainty studies involving Monte Carlo simulation. Most simulation procedures to date have been sequential in nature: a random vector describing the structural system is simulated, an FE analysis is conducted, the response quantities are identified by post-processing, and the process is repeated until the standard error in the response of interest is within desired limits. A different approach is to generate all the sample (random) structures before performing any FE analysis, rank-order them according to some appropriate measure of distance between the realizations, and perform the FE analyses in that order, using the results from the previous analysis to initialize the current one. The sample structures may also be organized into a tree-type data structure, in which each node represents a random sample and the tree is traversed from the root until every node is visited exactly once; this differs from sequential ordering in that it uses the solution of the "closest" node to initialize the iterative solver. The computational efficiencies that result from such orderings (at a modest expense of additional data storage) are demonstrated through a stability analysis of a system with closely spaced buckling loads and the modal analysis of a simply supported beam.
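
As a rough illustration of the reuse idea, the sketch below generates all random samples first, orders them (here crudely, by parameter-vector norm), and warm-starts an inverse power iteration with the previous sample's eigenvector. The 10-DOF spring chain and the ordering rule are assumptions for illustration only.

```python
# Warm-started Monte Carlo eigen-analysis: order samples, reuse eigenvectors.
import numpy as np

rng = np.random.default_rng(1)
ndof, nsamp = 10, 200

def stiffness(k):
    """Tridiagonal stiffness matrix of a fixed-fixed spring chain."""
    return (np.diag(k[:-1] + k[1:])
            - np.diag(k[1:-1], 1) - np.diag(k[1:-1], -1))

def inverse_iteration(K, v0, tol=1e-10, maxit=500):
    """Smallest eigenpair by inverse power iteration (K could be prefactored)."""
    v = v0 / np.linalg.norm(v0)
    lam_old, its = 0.0, 0
    for its in range(1, maxit + 1):
        w = np.linalg.solve(K, v)
        v = w / np.linalg.norm(w)
        lam = v @ K @ v
        if abs(lam - lam_old) < tol * abs(lam):
            break
        lam_old = lam
    return lam, v, its

samples = 1.0 + 0.2 * rng.standard_normal((nsamp, ndof + 1))  # random stiffnesses
order = np.argsort(np.linalg.norm(samples, axis=1))           # crude distance ordering

v = rng.standard_normal(ndof)          # arbitrary start for the first sample only
total_its = 0
for idx in order:
    lam, v, its = inverse_iteration(stiffness(samples[idx]), v)  # warm start
    total_its += its
print("average iterations per sample:", total_its / nsamp)
```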

3.
Discontinuity shear strength plays a critical role in many problems encountered in rock engineering, especially in the design of rock slopes. Since its precise estimation is generally not possible, it is crucial that the errors and uncertainties associated with its estimate be quantified and reflected in the design procedure. In this study, the uncertainties underlying discontinuity shear strength are thoroughly examined and an uncertainty analysis model is developed for the estimation of in situ discontinuity shear strength, with special emphasis on rock slopes. An extensive literature survey on the shear behavior of unfilled rock discontinuities has been carried out, and the data needed to quantify the uncertainties are extracted from this survey. These uncertainties stem from the discrepancies between laboratory-measured and in situ discontinuity shear strength values, as well as from the inherent variability of shear strength within a rock medium. The main causes of discrepancy, namely scale, anisotropy, and water saturation, are considered. For each source of discrepancy a correction factor, treated as a random variable, is assigned, and guidelines for quantifying the statistical parameters of these correction factors are presented within the framework of the proposed uncertainty analysis model. The proposed model provides an analytical tool for the systematic treatment of the uncertainties involved in estimating the in situ peak friction angle from laboratory test results.
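
One plausible reading of the correction-factor model is a multiplicative Monte Carlo combination, sketched below; all distribution parameters are placeholders rather than values from the study.

```python
# Hedged sketch: in situ friction angle as lab value times random
# correction factors for scale, anisotropy and saturation (assumed form).
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

phi_lab = rng.normal(35.0, 2.0, N)       # lab peak friction angle (deg), assumed
C_scale = rng.normal(0.90, 0.05, N)      # scale correction factor, assumed
C_aniso = rng.normal(0.95, 0.04, N)      # anisotropy correction factor, assumed
C_water = rng.normal(0.92, 0.05, N)      # saturation correction factor, assumed

phi_insitu = phi_lab * C_scale * C_aniso * C_water
lo, hi = np.percentile(phi_insitu, [5, 95])
print(f"in situ phi: mean {phi_insitu.mean():.1f} deg, "
      f"90% band [{lo:.1f}, {hi:.1f}] deg")
```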

4.
The event-oriented analysis of technical objects is generally accomplished by representing them as complete or incomplete systems and subsystems of events. The article shows how compound engineering systems of events can be partitioned by inclusion-exclusion expansion into individual and common-cause modes. The event analysis is based on the random variable model and employs results from operational modes and effects analysis, reliability analysis, and uncertainty analysis. System redundancy and robustness are treated as uncertainties, since a number of events are in fact possible; this is expressed by the entropy concept of probability theory, conditioned on operational and failure modes, respectively. Relative and average uncertainty measures are introduced to facilitate the interpretation of uncertainty in engineering problems. It is investigated how the sensitivity analysis of reliability measures can be applied to the assessment of system uncertainties. Numerical examples illustrate the application of event-oriented system analysis to series structural systems with common-cause failures. Additionally, system performance presentation and constrained optimization, as well as potential improvements in system analysis, design, and maintenance, are investigated.
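
The entropy-based uncertainty measures can be illustrated in a few lines. The sketch below computes the Shannon entropy of an assumed set of mutually exclusive system modes, plus a relative measure normalized by the maximum entropy log(n).

```python
# Shannon entropy of a set of mutually exclusive modes, with a relative
# uncertainty measure.  The three-mode probability vector is an assumed example.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                         # 0 * log(0) contributes nothing
    return -np.sum(p * np.log(p))

p_modes = [0.85, 0.10, 0.05]             # operational / failure-mode probabilities
H = entropy(p_modes)
H_max = np.log(len(p_modes))             # uniform distribution maximizes entropy
print(f"entropy {H:.3f} nats, relative uncertainty {H / H_max:.3f}")
```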

5.
Methods customarily applied in situations involving uncertainties are shown to have important ramifications for the selection of the extreme values used in the design of structural elements subjected to environmental load processes. The more practical choice of uncertainty-free environmental values, rather than load-effect values, as basic design parameters is investigated. It is suggested that the influence of physical uncertainties be anticipated by judiciously increasing the return period of the environmental extreme value; model uncertainty can be taken into account by applying an appropriate safety factor to the corresponding design load effect.

6.
Water Research, 1987, 21(9): 1135-1142
The influence of bioactivity, solid/solution ratio, and pH on the isotopic exchangeability of phosphate in a freshwater sediment was investigated. From a comparison of the results obtained for the same sample in the presence or absence of formaldehyde, it is concluded that microorganisms can affect the analysis of isotopically exchangeable phosphate. Irradiation with UV light caused a sharp rise in isotopic exchangeability. In the pH range 6.6-8.4, the isotopic exchangeability of phosphate increases with decreasing pH, which is attributed to an easier exchange of H2PO4− than of HPO42−. The influence of the solid/solution ratio on the isotopic exchangeability of phosphate in the solid phase is small or nil. However, because of the relatively large amount of phosphate that goes into solution, the total isotopic exchangeability Ei of phosphate in the solid and liquid phases together is strongly increased at a low solid/solution ratio. From these results it is concluded that, for a meaningful comparison of isotopically exchangeable phosphate in different soils or sediments, it is essential to work at a nearly constant pH and solid/solution ratio. Such a comparison was made for 26 freshwater sediments from the Rhine/Meuse delta, in the presence and absence of 0.17 mol l−1 formaldehyde as a biological inhibitor, and it is concluded that addition of the latter is essential. The lowest total isotopic exchangeabilities of phosphate (15-25) were measured in the sediments collected from the Haringvliet, whereas higher values (40-80) were found in the sediments from the Brielse Meer and the Grote Rug. This could well be indicative of a similar variation in the biological availability of the phosphate in the investigated sediments.

7.
Reliability sensitivity method by line sampling
Reliability sensitivity refers to the derivative of the failure probability with respect to a distribution parameter of a basic random variable. Conventionally, obtaining it requires repeated evaluations of the failure probability for different distribution parameters, which is direct but computationally expensive. An efficient simulation algorithm is presented that performs reliability sensitivity analysis using the line sampling technique, which evaluates the failure probability accurately for high-dimensional problems and remains competitive for low-dimensional ones. Building on the line sampling procedure for failure probability analysis, the concept and implementation of reliability sensitivity estimation are presented. It is shown that the desired sensitivity information can be obtained at a very limited increase in computational effort beyond the line sampling failure probability analysis itself. The presented algorithm is more efficient than one based on direct Monte Carlo simulation, especially when the failure probability is low and the number of random variables is large, as illustrated by several examples. Limitations of the line-sampling-based reliability sensitivity method are also demonstrated by a numerical example.
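
For a linear limit state in standard normal space, line sampling reduces to a few lines of code. The sketch below assumes the important direction is known, solves for the root along each line, averages Phi(-c_i), and adds a finite-difference sensitivity with respect to a mean shift; it mirrors the paper's procedure in outline only.

```python
# Line sampling for g(u) = beta - alpha.u in standard normal space, plus a
# finite-difference reliability sensitivity with respect to a mean shift.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(3)
dim, N, beta = 10, 100, 3.0
alpha = np.ones(dim) / np.sqrt(dim)        # assumed known important direction

def g(u, shift=0.0):
    """Linear limit state; `shift` mimics a change of a mean parameter."""
    return beta - alpha @ (u + shift)

def pf_line_sampling(shift=0.0):
    cs = []
    for _ in range(N):
        u = rng.standard_normal(dim)
        u_perp = u - (alpha @ u) * alpha   # project out the alpha component
        # root of g along the line u_perp + c * alpha
        c = brentq(lambda c: g(u_perp + c * alpha, shift), -10.0, 10.0)
        cs.append(c)
    return norm.cdf(-np.array(cs)).mean()  # average of per-line probabilities

pf = pf_line_sampling()
dpf = (pf_line_sampling(0.01) - pf_line_sampling(-0.01)) / 0.02
print(f"pf ~ {pf:.3e} (exact {norm.cdf(-beta):.3e}), d(pf)/d(shift) ~ {dpf:.3e}")
```

For nonlinear limit states the per-line root search is the only extra cost; each line still contributes an exact conditional probability, which is why the estimator stays efficient at small failure probabilities.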

8.
The ISO 7730 Standard classifies thermal environments into three categories according to the PMV range, which narrows progressively as a lower percentage of dissatisfied occupants is required. The PMV value is strongly affected by changes in its independent variables (air temperature, mean radiant temperature, air velocity, relative humidity, metabolic rate, and clothing insulation); the accuracy requirements of the sensors measuring the environmental quantities, as well as the assessment of the parameters related to activity and clothing, are therefore crucial. This work presents a sensitivity analysis of the PMV index with respect to the accuracy of its six independent variables. The results clearly show that the PMV range widths fixed for each class in ISO 7730 are close to the PMV uncertainty arising from measuring-device accuracy, often making the classification of an environment effectively a random operation.
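
A first-order uncertainty budget of the kind described can be sketched as follows. The `pmv` function below is a crude linear surrogate, not the ISO 7730 equation, and the accuracy figures are typical instrument specifications assumed for illustration; substitute a full Fanger implementation in practice.

```python
# First-order (finite-difference) propagation of sensor accuracies to PMV.
# `pmv` is a placeholder surrogate, NOT the ISO 7730 Fanger equation.
import numpy as np

def pmv(ta, tr, va, rh, met, clo):
    """Crude surrogate, roughly linear around neutral conditions (assumed)."""
    return (0.25 * (ta - 24.0) + 0.13 * (tr - 24.0) - 1.0 * (va - 0.1)
            + 0.01 * (rh - 50.0) + 1.2 * (met - 1.2) + 0.6 * (clo - 0.5))

x0 = dict(ta=24.0, tr=24.0, va=0.10, rh=50.0, met=1.2, clo=0.5)   # nominal point
acc = dict(ta=0.2, tr=0.2, va=0.05, rh=5.0, met=0.1, clo=0.1)     # sensor accuracies

var = 0.0
for name, dx in acc.items():
    hi, lo = dict(x0), dict(x0)
    hi[name] += dx
    lo[name] -= dx
    dPMV_dx = (pmv(**hi) - pmv(**lo)) / (2 * dx)   # central difference
    var += (dPMV_dx * dx) ** 2                      # first-order RSS budget
print(f"combined PMV uncertainty ~ +/-{np.sqrt(var):.2f}")
print("compare ISO 7730 class half-widths: A +/-0.2, B +/-0.5, C +/-0.7")
```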

9.
This paper describes the diagnosis phase of a highway safety expert system. The overall objective of the expert system is to provide highway safety officials with an efficient tool to identify accident-prone locations and then quickly and reliably advise on appropriate countermeasures based on an analysis of the accident and roadway environment data. The system has three basic phases: detection, diagnosis, and remedy. In the diagnosis phase, a knowledge-based system is developed to identify the causes and contributing factors of safety problems at accident-prone locations and to suggest appropriate countermeasures. It is shown that the knowledge-based approach best suits the diagnosis process, since it involves a great deal of judgment and experience on the part of the safety engineer. The paper describes the steps involved in developing the diagnosis phase, including knowledge acquisition, problem-solving strategy, system features, uncertainty handling, and system verification and validation. The output of the diagnosis phase is a set of applicable countermeasures for each accident-prone location and the degree of belief in each countermeasure. The knowledge-based system was validated using several case studies, which demonstrated satisfactory results.

10.
Evacuation life safety in a one-room public assembly building has been analysed with regard to uncertainty and risk. Limit state equations have been defined using response surface approximations of output from computer programs. Several uncertainty analysis procedures have been employed and compared: the analytical first-order second-moment (FOSM) method, two numerical random sampling procedures (simple random sampling and Latin hypercube sampling), and the standard PRA method. Eight scenarios have been analysed in isolation as well as aggregated into an event tree, with branches denoting functioning/failing protection systems (alarm, sprinkler, and emergency door). Input parameter distributions have been subjectively quantified and classified by category: knowledge or stochastic uncertainty. The risk assessment results comprise the probability of failure pf, the reliability index β, and the CCDF (complementary cumulative distribution function) of the evacuation time-margin deficit. Of special interest is the calculation of confidence intervals for the distribution of CCDFs obtained by the two-phase Monte Carlo sampling procedure, which allows a distinction between knowledge and stochastic uncertainty. The importance analysis, carried out analytically, yields data of fundamental significance for understanding the practical design problem. Partial coefficients have been treated only by calculating the values implicit in a few existing sample design configurations; future studies, preferably using optimization procedures, are needed to produce generally valid values.
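
The two-phase sampling can be skeletonized as a nested Monte Carlo loop: the outer loop samples the knowledge (epistemic) parameters and the inner loop the stochastic variables, producing a family of CCDFs whose spread reflects knowledge uncertainty. The toy time-margin model below is an assumption for illustration.

```python
# Two-phase (nested) Monte Carlo: outer = knowledge uncertainty,
# inner = stochastic variability; yields a family of CCDFs.
import numpy as np

rng = np.random.default_rng(4)
n_outer, n_inner = 50, 2000
thresholds = np.linspace(-120.0, 120.0, 25)      # margin-deficit grid (s)

ccdfs = []
for _ in range(n_outer):                         # outer: knowledge parameters
    mu_avail = rng.normal(300.0, 30.0)           # uncertain mean available safe time
    mu_evac = rng.normal(240.0, 20.0)            # uncertain mean evacuation time
    t_avail = rng.lognormal(np.log(mu_avail), 0.2, n_inner)  # inner: stochastic
    t_evac = rng.lognormal(np.log(mu_evac), 0.3, n_inner)
    deficit = t_evac - t_avail                   # evacuation time-margin deficit
    ccdfs.append([(deficit > d).mean() for d in thresholds])

ccdfs = np.array(ccdfs)
i0 = int(np.argmin(np.abs(thresholds)))          # threshold closest to zero
pf = ccdfs[:, i0]                                # P(deficit > 0) per knowledge sample
print(f"pf median {np.median(pf):.3f}, 90% knowledge band "
      f"[{np.percentile(pf, 5):.3f}, {np.percentile(pf, 95):.3f}]")
```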

11.
Energy and Buildings, 1999, 30(1): 61-71
The goal of performing sensitivity analysis of a simulation model is to determine the effect of input variation and input uncertainty on the output data. Sensitivity analysis is an unavoidable step in model validation and is also generally useful when performing simulations: the user must know the influence of the accuracy of the data input to the program. This paper presents a methodology for performing sensitivity analysis, as well as the tools that implement it, MISA and LiSA, which were developed within IEA-ECBCS Annex 23 `Multizone air flow modeling'. The basic concepts of sensitivity analysis and the main characteristics of the developed tools are presented. More detailed information is available in the final report of sub-task 3, `Evaluation of COMIS', of Annex 23 [J.-M. Fürbringer, C.-A. Roulet, R. Borchiellini, Evaluation of COMIS, final report IEA.ECB&CS Annex 23 Multizone Air flow Modelling, LESO-PB, EPFL, 1015 Lausanne, Switzerland, 1996].

12.
This paper explores the wind stochastic field from the new viewpoint of the stochastic Fourier spectrum (SFS). The basic random parameters of the wind stochastic field, the roughness length z0 and the mean wind velocity at 10 m height U10, together with their probability density functions (PDF), are obtained. This makes it possible to apply the probability density evolution method (PDEM), which has been shown to be highly accurate and efficient, to computing the dynamic response and reliability of tall buildings subject to wind loading. The principles of the PDEM and the corresponding numerical solution algorithm are first presented. The adopted model of the wind stochastic field is then described briefly, and the simulation method for the fluctuating wind velocity based on the SFS is introduced. Finally, as an example of the application of the PDEM, a 20-storey frame subject to wind loading is investigated in detail. The responses, including the mean value and the standard deviation, and the reliabilities of the frame are evaluated by the PDEM. The results demonstrate that the PDEM is applicable and efficient in the dynamic response and reliability analysis of wind-excited tall buildings.
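
The simulation step can be sketched with a spectral-representation synthesis. Below, z0 and U10 are drawn from assumed lognormal distributions and a Davenport-type spectrum stands in for the paper's stochastic Fourier spectrum; all numbers are placeholders.

```python
# Fluctuating wind record with random basic parameters z0 and U10,
# synthesised by the spectral-representation method with random phases.
import numpy as np

rng = np.random.default_rng(5)

def davenport_spectrum(f, U10, z0):
    kappa = (0.4 / np.log(10.0 / z0)) ** 2      # surface drag coefficient
    x = 1200.0 * f / U10
    return 4.0 * kappa * U10**2 * x**2 / (f * (1.0 + x**2) ** (4.0 / 3.0))

def wind_sample(T=600.0, dt=0.25):
    z0 = rng.lognormal(np.log(0.05), 0.3)       # roughness length (m), assumed PDF
    U10 = rng.lognormal(np.log(25.0), 0.15)     # 10 m mean wind speed (m/s), assumed
    t = np.arange(0.0, T, dt)
    f = np.arange(1, 1024) / T                  # frequency grid, df = 1/T
    S = davenport_spectrum(f, U10, z0)
    phases = rng.uniform(0.0, 2.0 * np.pi, f.size)
    # u(t) = sum_k sqrt(2 S(f_k) df) cos(2 pi f_k t + phi_k)
    u = np.sqrt(2.0 * S / T) @ np.cos(2.0 * np.pi * np.outer(f, t) + phases[:, None])
    return U10 + u                              # mean wind + fluctuation

rec = wind_sample()
print(f"simulated record: mean {rec.mean():.1f} m/s, std {rec.std():.2f} m/s")
```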

13.
Abstract: The modeling of in-service behavior is of primary importance when reassessing complex structures such as harbor structures and when performing risk analysis. To this end, the monitoring of structures allows the level of loading to be assessed and provides more realistic models for mechanical behavior, or input values for their parameters. Moreover, for complex structures and owing to building hazards, stochastic modeling is needed to represent the large scatter of measured quantities. In this article, a step-by-step procedure for structural identification is presented. A decomposition of the random variables on a Polynomial Chaos basis is selected, and is shown, on a maximum-likelihood basis, to represent the basic variables better than preselected distribution functions. The decomposed variables are used for a stochastic analysis that is further updated with available monitoring data. The model can be used to follow the structure's behavior during in-service or extreme conditions and to perform a reliability analysis. The proposed procedure is demonstrated using available data from the monitoring of a pile-supported wharf in the Port of Nantes, France, but it can be generalized to similar monitored structures.
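
A condensed sketch of a one-dimensional Hermite Polynomial Chaos identification follows: empirical quantiles are mapped to a standard normal germ and regressed on Hermite polynomials. This quantile-regression scheme is a common stand-in for the paper's maximum-likelihood estimation, and the gamma-distributed "monitoring data" are synthetic.

```python
# Identify a 1-D Hermite PC representation X = sum_i a_i He_i(xi) from samples.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from scipy.stats import norm

rng = np.random.default_rng(6)
data = rng.gamma(4.0, 2.0, 2000)                 # synthetic monitoring data (assumed)

order = 3
x_sorted = np.sort(data)
n = x_sorted.size
xi = norm.ppf((np.arange(1, n + 1) - 0.5) / n)   # standard normal germ quantiles

Psi = hermevander(xi, order)                     # probabilists' Hermite basis He_0..He_3
coef, *_ = np.linalg.lstsq(Psi, x_sorted, rcond=None)
print("PC coefficients:", np.round(coef, 3))

# The fitted expansion then generates cheap samples for stochastic analysis:
xi_new = rng.standard_normal(5)
print("synthetic samples:", np.round(hermevander(xi_new, order) @ coef, 2))
```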

14.
The selection criteria for the Euler-Bernoulli or Timoshenko beam theories are generally given by some deterministic rule involving the beam dimensions. The Euler-Bernoulli theory is used to model the behavior of flexure-dominated (or "long") beams, while the Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses of the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes, and the two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for an uncertain elasticity modulus, the propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, the propagation of uncertainty for a random beam height is much larger for Euler-Bernoulli displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities.
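
A toy, point-parameter version of the comparison (the paper uses stochastic fields and a Monte Carlo-Galerkin scheme) already shows the effect for a random beam height: the Euler-Bernoulli deflection scales as 1/h^3 while the Timoshenko shear term scales only as 1/h, so the same input scatter propagates differently. All numbers below are assumed.

```python
# Monte Carlo tip deflection of a short cantilever: Euler-Bernoulli vs
# Timoshenko, with the beam height as the random variable.
import numpy as np

rng = np.random.default_rng(7)
N = 200_000
P, L, b = 1.0e3, 2.0, 0.1            # load (N), length (m), width (m): assumed
E, nu, ks = 210e9, 0.3, 5.0 / 6.0    # steel-like properties, shear factor
h = rng.normal(0.4, 0.04, N)         # random beam height (m), 10% CoV assumed

I = b * h**3 / 12.0
A = b * h
G = E / (2.0 * (1.0 + nu))

delta_eb = P * L**3 / (3.0 * E * I)              # Euler-Bernoulli
delta_ti = delta_eb + P * L / (ks * G * A)       # plus Timoshenko shear term

for name, d in (("Euler-Bernoulli", delta_eb), ("Timoshenko", delta_ti)):
    print(f"{name:16s} mean {d.mean():.3e} m, CoV {d.std() / d.mean():.3f}")
```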

15.
The assessment of the dynamic or seismic performance of complex structures often requires integrating the structural equation of motion in the time domain within a nonlinear analysis. Although sophisticated methods have been developed for the nonlinear analysis of masonry wall structures, including macro- and micro-modeling approaches, these demand a computational effort that still limits the size and complexity of the structures analyzed. This paper presents an alternative method based on the Generalized Matrix Formulation for masonry skeletal structures and load-bearing wall systems, which has been shown to be an efficient formulation for analyzing the strength capacity of these kinds of structures (Roca et al. (2005) [17]). The basic formulation has been complemented with a uniaxial cyclic constitutive model for masonry and a time integration scheme. The ability of the resulting approach to predict the nonlinear dynamic response of masonry structures is shown through its application to the time-domain analysis of an experimental scale masonry building with available experimental results on its dynamic response.

16.
Based on the basic principles of the overall planning method and the rules for calculating budgeted quantities, this paper proposes a calculation order for the quantities in construction-drawing budgets for heating engineering works, points out issues to note during budget preparation together with the relevant calculation formulas, and describes a method for superimposing the base prices of budget quota sub-items. Applying these techniques can improve the efficiency and quality of budget preparation and review.

17.
Uncertainty propagation in probabilistic seismic loss estimation
Probabilistic estimation of the losses in a building due to earthquake damage is a topic of interest to decision makers and an area of active research. One promising approach, proposed by the Pacific Earthquake Engineering Research (PEER) Center, breaks the analysis into separate components associated with the ground motion hazard, structural response, damage to components, and repair costs. Each stage of this method has both inherent (aleatory) randomness and (epistemic) model uncertainty, and these two sources of uncertainty must be propagated through the analysis to determine the total uncertainty in the resulting loss estimates. In this paper, the PEER framework for seismic loss estimation is reviewed and options for both characterizing and propagating the various sources of uncertainty are proposed. Models for correlations (among, e.g., element repair costs) are proposed that may be useful when empirical data are lacking. Several options for propagating uncertainty are discussed, ranging from flexible but expensive Monte Carlo simulation to closed-form solutions that require specific functional forms to be assumed for the relationships between variables. A procedure that falls between these two extremes is proposed: it integrates over the discrete element damage states and uses the first-order second-moment method to collapse several conditional random variables into a single conditional random variable representing the total repair cost given the ground motion intensity. Numerical integration is then used to incorporate the ground motion hazard. Studies attempting to characterize epistemic uncertainty or develop specific elements of the framework are referenced as an aid for users wishing to implement this loss-estimation procedure.
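
A schematic version of this propagation chain is sketched below: discrete damage states per component, a first-order second-moment collapse to a conditional mean and variance of repair cost, and numerical integration over a hazard curve. All fragilities, costs, and hazard numbers are invented placeholders.

```python
# Schematic PEER-style loss chain: fragilities -> FOSM collapse of repair
# cost given IM -> numerical integration over a toy hazard curve.
import numpy as np
from scipy.stats import norm

ims = np.linspace(0.05, 2.0, 40)                 # intensity-measure grid (g)
haz_rate = 0.02 * ims ** -2.5                    # toy annual rate of exceeding im

medians = np.array([0.3, 0.8])                   # fragility medians for DS1, DS2 (g)
betas = np.array([0.4, 0.4])                     # fragility dispersions
costs = np.array([0.0, 5e3, 2e4])                # mean repair cost for DS0, DS1, DS2
cost_cov = 0.5                                   # CoV of cost within a damage state

def cost_given_im(im):
    p_exc = norm.cdf(np.log(im / medians) / betas)          # P(DS>=1), P(DS>=2)
    p_ds = -np.diff(np.concatenate(([1.0], p_exc, [0.0])))  # P(DS == 0, 1, 2)
    mean = p_ds @ costs
    # FOSM collapse: within-state variance + between-state variance
    var = p_ds @ ((cost_cov * costs) ** 2 + (costs - mean) ** 2)
    return mean, var

means, vars_ = np.array([cost_given_im(im) for im in ims]).T
dens = -np.gradient(haz_rate, ims)               # toy |d(rate)/d(im)|
eal = np.sum(means * dens) * (ims[1] - ims[0])   # expected annual loss
print(f"expected annual loss ~ {eal:,.0f} (toy units)")
print(f"cost | im=1.0 g: mean {means[19]:,.0f}, std {np.sqrt(vars_[19]):,.0f}")
```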

18.
Reliability analysis of underground caverns based on the non-intrusive stochastic finite element method
A non-intrusive stochastic finite element method is proposed for the deformation reliability analysis of underground caverns. The basic principles of stochastic polynomial expansion are introduced, and the SIGMA/W module of GEOSLOPE is used for the deterministic finite element analysis. An interface between the stochastic polynomial expansion and the SIGMA/W module, together with its flow chart, is proposed, thereby integrating the deterministic finite element analysis with the stochastic analysis. Finally, the application of the non-intrusive stochastic finite element method to the deformation reliability analysis of underground caverns is studied. The results show that the method decouples the stochastic analysis from the deterministic finite element analysis, and that its computational efficiency is far beyond that of conventional Monte Carlo simulation, making it an effective approach for deformation reliability problems of underground caverns. Secondary lining support is an effective way of improving cavern reliability. In addition, the variability of the rock mass deformation modulus has a very pronounced influence on the deformation reliability of underground caverns, whereas the variability of the rock mass unit weight has essentially no influence. The deformation modulus of the rock mass should therefore be determined as accurately as possible during geological investigation, so as to improve the deformation reliability of underground caverns effectively.
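
The non-intrusive idea can be illustrated with the deterministic solver treated as a black box: sample the input, run the solver at design points, and fit a polynomial-chaos surrogate by regression, with no change to the FE code. The closed-form `solver` below merely stands in for a SIGMA/W run, and the lognormal deformation modulus and displacement limit are assumptions.

```python
# Non-intrusive PCE: black-box deterministic runs + regression surrogate,
# then cheap Monte Carlo on the surrogate for the reliability estimate.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(8)

def solver(E):
    """Black-box stand-in for the deterministic FE analysis:
    cavern crown displacement (mm), decreasing with modulus E (GPa)."""
    return 120.0 / E + 0.5

# lognormal deformation modulus expressed through a standard normal germ
mu_lnE, sd_lnE = np.log(10.0), 0.3               # assumed statistics
xi_doe = rng.standard_normal(60)                 # experimental design points
disp = np.array([solver(np.exp(mu_lnE + sd_lnE * x)) for x in xi_doe])

order = 4
coef, *_ = np.linalg.lstsq(hermevander(xi_doe, order), disp, rcond=None)

# cheap surrogate evaluation for the reliability analysis
xi_mc = rng.standard_normal(1_000_000)
disp_mc = hermevander(xi_mc, order) @ coef
limit = 15.0                                     # allowable displacement (mm), assumed
print(f"P(displacement > {limit} mm) ~ {(disp_mc > limit).mean():.4f}")
```

The decoupling claimed in the abstract is visible here: the solver is only called 60 times, while the million-sample reliability estimate runs entirely on the surrogate.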

19.
Energy and Buildings, 1987, 10(2): 135-150
A Markovian stochastic approach to the simulation of passive and hybrid solar devices has been developed. The driving variables of the modelled device, as well as the temperatures that characterize its thermal state, are discretized. These quantities are used to set up a random state vector, which defines a discrete homogeneous Markov chain. The transition probabilities of the chain are calculated using stochastic matrices obtained by reducing real weather- and user-dependent data sequences into the appropriate form. Determining the long-run probability distribution of the chain provides the evaluation of energy performance and thermal comfort indicators. An inter-model comparison with several internationally well-known deterministic computer programs (DYWON, HELIOS, PASSIM, SERIRES) has been carried out by simulating a direct-gain office room; this analysis showed close correspondence between the stochastic and deterministic modelling results. The main advantage of this approach is the model's ability to account for the random fluctuations of the driving variables, which affect the thermal performance of the solar device. The development of microcomputer programs based on this approach is regarded as the final goal of this research topic.
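
The Markov-chain machinery can be sketched compactly: discretize the state, count observed transitions into a row-stochastic matrix, and extract the long-run distribution. The AR(1) "indoor temperature" record below is synthetic and stands in for real weather- and user-dependent data.

```python
# Discrete homogeneous Markov chain from data: transition counts,
# row-normalisation, and the long-run distribution by power iteration.
import numpy as np

rng = np.random.default_rng(9)

# synthetic hourly indoor-temperature record (placeholder for real data)
T = np.empty(20_000)
T[0] = 22.0
for k in range(1, T.size):
    T[k] = 22.0 + 0.95 * (T[k - 1] - 22.0) + rng.normal(0.0, 0.5)

edges = np.linspace(16.0, 28.0, 13)              # 12 temperature states, 1 K wide
states = np.clip(np.digitize(T, edges) - 1, 0, 11)

n = 12
P = np.zeros((n, n))
for a, b in zip(states[:-1], states[1:]):        # count observed transitions
    P[a, b] += 1.0
for i in range(n):                               # normalise rows; self-loop if unvisited
    s = P[i].sum()
    P[i] = P[i] / s if s > 0 else np.eye(n)[i]

pi = np.bincount(states, minlength=n) / states.size   # start from empirical frequencies
for _ in range(1000):                            # power iteration to the long run
    pi = pi @ P

comfort = pi[(edges[:-1] >= 20.0) & (edges[:-1] < 24.0)].sum()
print(f"long-run probability of the 20-24 C comfort band: {comfort:.2f}")
```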

20.
One of the best approaches to date for obtaining overall binding constants (Ko) for Al and dissolved organic matter (DOM) from acidic soil solutions is to collect 'free' Al data with diffusive gradients in thin films (DGT) and to infer the Ko values by fitting a continuous distribution model based on Scatchard plots. Although established literature clearly demonstrates the usefulness of the Scatchard approach, relatively little attention has been given to a realistic assessment of the uncertainties associated with the final fitted Ko values. In this study we present an uncertainty analysis of the fitted Ko values using a synthetic dataset with different levels of random noise and a real dataset using DGT data from an acidic soil solution. The parameters of the continuous distribution model and their corresponding upper and lower 95% uncertainty bounds were determined using the Shuffled Complex Evolution Metropolis (SCEM) algorithm. Although reasonable fits of the distribution model to the experimental data were obtained in all cases, appreciable uncertainty in the resulting Ko values was found, for three main reasons. Firstly, obtaining 'free' Al data, even with the DGT method, is relatively difficult, leading to uncertainty in the data. Secondly, before Scatchard plots can be constructed, the maximum binding capacity (MBC) must be estimated, and any uncertainty in this MBC propagates into the final plots. Thirdly, as the final fitted Ko values are largely based on extrapolation, a small uncertainty in the fit of the binding data results in an appreciable uncertainty in the obtained Ko. Therefore, while trends in Ko for Al and DOM can easily be discerned and compared, the uncertainty in the Ko values hinders their application in quantitative speciation calculations. More comprehensive speciation models that avoid the use of Ko seem better suited to this purpose.
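
The Scatchard construction and the flavor of its uncertainty can be reproduced on synthetic single-site data: the plot of bound/free versus bound is linear with slope -Ko, and a simple bootstrap (standing in here for the SCEM algorithm) puts rough bounds on the fitted constant. All data below are synthetic.

```python
# Scatchard fit of synthetic single-site binding data, with a bootstrap
# to illustrate the uncertainty in the fitted overall constant Ko.
import numpy as np

rng = np.random.default_rng(10)

K_true, MBC = 5.0e5, 1.0e-4              # 'true' constant (L/mol), capacity (mol/g)
free = np.logspace(-7.0, -4.5, 15)       # 'free' Al concentrations (mol/L)
bound = MBC * K_true * free / (1.0 + K_true * free)    # Langmuir-type binding
bound = bound * rng.lognormal(0.0, 0.10, free.size)    # ~10% measurement noise

def fit_K(b, f):
    """Scatchard plot: b/f vs b is linear with slope -K for a single site."""
    slope, _ = np.polyfit(b, b / f, 1)
    return -slope

K_hat = fit_K(bound, free)

boot = []
for _ in range(2000):                    # resample the 15 points with replacement
    i = rng.integers(0, free.size, free.size)
    boot.append(fit_K(bound[i], free[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Ko ~ {K_hat:.2e} (true {K_true:.2e}); "
      f"95% bootstrap CI [{lo:.2e}, {hi:.2e}]")
```

Even with modest 10% noise, the interval is wide, echoing the abstract's point that extrapolation-based Ko estimates carry appreciable uncertainty.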
