Similar Articles (20 results found)
1.
Spatial analysis techniques have been introduced as an innovative approach to hazardous road segment identification (HRSI). In this study, the performance of two spatial analysis methods and four conventional methods for HRSI was compared against three quantitative evaluation criteria. The spatial analysis methods considered in this study are the local spatial autocorrelation method and the kernel density estimation (KDE) method. It was found that the empirical Bayes (EB) method and the KDE method outperformed the other HRSI approaches. By transferring the kernel density function into a form analogous to that of the EB function, we further prove that the KDE method can be considered a simplified version of the EB method in which crashes reported at neighboring spatial units serve as the reference population for estimating the EB-adjusted crashes. Theoretically, the KDE method may outperform the EB method in HRSI when the neighboring spatial units provide more useful information on the expected crash frequency than a safety performance function does.
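The parallel drawn above between the KDE estimate and the EB adjustment can be made concrete with a small numerical sketch. The snippet below is illustrative only (segment data, bandwidth, dispersion parameter and the constant safety performance function are all assumed, not taken from the paper): it ranks hypothetical road segments by a kernel-weighted crash frequency and by an EB-adjusted frequency.

```python
import numpy as np

# Hypothetical example: observed crash counts on 50 segments of a 10 km road.
rng = np.random.default_rng(0)
midpoints = np.linspace(0, 10, 50)
crashes = rng.poisson(lam=3.0, size=50)

def kde_expected(crashes, midpoints, bandwidth=1.0):
    """Kernel-weighted crash frequency: neighbouring segments act as the
    reference population, analogous to the EB adjustment."""
    diff = midpoints[:, None] - midpoints[None, :]
    w = np.exp(-0.5 * (diff / bandwidth) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ crashes

def eb_adjusted(crashes, mu_spf, phi=2.0):
    """Empirical Bayes estimate: weighted average of the SPF prediction and the
    observed count, with negative binomial dispersion parameter phi."""
    weight = phi / (phi + mu_spf)
    return weight * mu_spf + (1.0 - weight) * crashes

kde_est = kde_expected(crashes, midpoints)
eb_est = eb_adjusted(crashes, mu_spf=crashes.mean())

# Segments with the highest estimated frequency are candidate hazardous segments.
print("Top 5 segments by KDE:", np.argsort(kde_est)[::-1][:5])
print("Top 5 segments by EB :", np.argsort(eb_est)[::-1][:5])
```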

2.
This study proposes a Bayesian spatio-temporal interaction approach for hotspot identification by applying the full Bayesian (FB) technique in the context of macroscopic safety analysis. Compared with the emerging Bayesian spatial and temporal approach, the Bayesian spatio-temporal interaction model contributes to a detailed understanding of differential trends by analyzing and mapping the probabilities that area-specific crash trends differ from the mean trend, and it highlights specific locations where crash occurrence is deteriorating or improving over time. With traffic analysis zone (TAZ) crash data collected in Florida, an empirical analysis was conducted to evaluate three approaches for hotspot identification: FB ranking using a Poisson-lognormal (PLN) model, FB ranking using a Bayesian spatial and temporal (B-ST) model and FB ranking using a Bayesian spatio-temporal interaction (B-ST-I) model. The results show that (a) the models accounting for space-time effects perform better in safety ranking than does the PLN model, and (b) the FB approach using the B-ST-I model significantly outperforms the B-ST approach in correctly identifying hotspots by explicitly accounting for the space-time variation in addition to the stable spatial/temporal patterns of crash occurrence. In practice, the B-ST-I approach plays a key role in addressing two issues: (a) how the identified hotspots have evolved over time and (b) the identification of areas that, whilst not yet hotspots, show a tendency to become hotspots. Finally, it can provide guidance to policy decision makers for efficiently improving zonal-level safety.

3.
Reliability growth tests are often used for achieving a target reliability for complex systems via multiple test-fix stages with limited testing resources. Such tests can be sped up via accelerated life testing (ALT), where test units are exposed to harsher-than-normal conditions. In this paper, a Bayesian framework is proposed to analyze ALT data in reliability growth. In particular, a complex system with components that have multiple competing failure modes is considered, and the time to failure of each failure mode is assumed to follow a Weibull distribution. We also assume that the accelerated condition has a fixed time scaling effect on each of the failure modes. In addition, a corrective action with fixed ineffectiveness can be performed at the end of each stage to reduce the occurrence of each failure mode. Under the Bayesian framework, a general model is developed to handle uncertainty on all model parameters, and several special cases with some parameters being known are also studied. A simulation study is conducted to assess the performance of the proposed models in estimating the final reliability of the system and to study the effects of unbiased and biased prior knowledge on the system-level reliability estimates.

4.
In this article, we define a model for fault detection during the beta testing phase of a software design project. Given sampled data, we illustrate how to estimate the failure rate and the number of faults in the software using Bayesian statistical methods under several different prior distributions. Then, given a suitable cost function, we show how to optimize the duration of a further test period for each of the prior structures considered. Michael Wiper acknowledges assistance from the Spanish Ministry of Science and Technology via the project BEC2000-0167 and support from projects SEJ2004-03303 and 06/HSE/0181/2004.
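As a rough illustration of the kind of analysis described (not the model of the cited article), the sketch below updates a conjugate Gamma prior on the fault-detection rate with hypothetical beta-test data and then scans a simple, assumed cost function to pick how many extra days of testing look worthwhile. The prior values, costs and residual-fault assumption are all invented for the example.

```python
import numpy as np
from scipy import stats

# Gamma(a0, b0) prior on the fault-detection rate (faults per day); values assumed.
a0, b0 = 2.0, 1.0
faults_found, days_tested = 7, 10.0
a, b = a0 + faults_found, b0 + days_tested      # conjugate Gamma posterior

print("posterior mean rate:", a / b)
print("95% credible interval:", stats.gamma.ppf([0.025, 0.975], a, scale=1 / b))

def expected_cost(extra_days, c_test_day=100.0, c_residual_fault=5000.0, horizon=30.0):
    """Hypothetical cost trade-off: testing cost plus a penalty for faults expected
    to remain undetected (crudely, posterior mean rate times the remaining horizon)."""
    rate = a / b
    residual = max(rate * (horizon - extra_days), 0.0)
    return c_test_day * extra_days + c_residual_fault * residual

t_grid = np.arange(0, 31)
best_t = min(t_grid, key=expected_cost)
print("cost-minimising additional test days:", int(best_t))
```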

5.
In this paper, a Bayesian probabilistic approach is applied to parameter identification of a hysteretic model using laboratory test data. A hysteretic model for multi-grid composite walls is proposed to model the behavior of multi-grid composite wall specimens under lateral cyclic loading. Effects of stiffness degradation, strength degradation and pinching are considered. The test data consist of the observed hysteretic curves of a precast composite wall specimen, a composite wall specimen reinforced by light steel, a retrofitted composite wall specimen and a cast-in-place composite wall specimen. Using the Bayesian approach, the identification results are presented in terms of the most probable values and the posterior covariance matrix of the model parameters. The implied hysteretic and backbone curves, with their uncertainties identified from the test data, are compared with their observed counterparts.

6.
The application of finite mixture regression models has recently gained interest among highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference, measured by the percentage deviation in ranking orders, was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than those from the NB model because of the better model specification. This finding was also supported by the simulation study, which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated that the false discovery rate and the false negative rate move in opposite directions (one increasing while the other decreases) as the threshold changes. Since the costs associated with false positives and false negatives differ, it is suggested that the optimal threshold value be chosen by weighing the trade-off between these two costs so that unnecessary expenses are minimized.

7.
《TEST》1980,31(1):605-647
The procedure of maximizing the missing information is applied to derive reference posterior probabilities for null hypotheses. The results shed further light on Lindley's paradox and suggest that a Bayesian interpretation of classical hypothesis testing is possible by providing a one-to-one approximate relationship between significance levels and posterior probabilities.
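Lindley's paradox, mentioned above, is easy to reproduce numerically. The short sketch below (an illustration of the paradox itself, not of the paper's reference-posterior derivation) keeps the two-sided p-value for a Normal mean fixed at 0.05 while the sample size grows; the posterior probability of the null, computed under an assumed N(0, 1) prior on the mean under the alternative, tends to one.

```python
import numpy as np
from scipy import stats

z = stats.norm.ppf(0.975)          # z-statistic held exactly at the 5% significance boundary
tau = 1.0                          # assumed prior std dev of the mean under H1
for n in [10, 100, 10_000, 1_000_000]:
    xbar = z / np.sqrt(n)          # sample mean giving this z for sample size n (sigma = 1)
    m0 = stats.norm.pdf(xbar, 0, np.sqrt(1 / n))            # marginal density under H0: mu = 0
    m1 = stats.norm.pdf(xbar, 0, np.sqrt(tau**2 + 1 / n))   # marginal density under H1
    post_h0 = m0 / (m0 + m1)       # posterior probability of H0 with prior odds 1:1
    print(f"n = {n:>9}: p-value = 0.05, P(H0 | data) = {post_h0:.3f}")
```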

8.
Experimental evaluation of hotspot identification methods
Identifying crash "hotspots", "blackspots", "sites with promise", or "high risk" locations is standard practice in departments of transportation throughout the US. The literature is replete with the development and discussion of statistical methods for hotspot identification (HSID). Theoretical derivations and empirical studies have been used to weigh the benefits of various HSID methods; however, few studies have used controlled experiments to assess these methods systematically. Using experimentally derived simulated data, which are argued to be superior to empirical data for this purpose, three hotspot identification methods observed in practice are evaluated: simple ranking, confidence interval, and empirical Bayes. With simulated data, sites with promise are known a priori, in contrast to empirical data, where high-risk sites are never known with certainty. To conduct the evaluation, properties of observed crash data are used to generate simulated crash frequency distributions at hypothetical sites. A variety of factors are manipulated to simulate a host of real-world conditions. Various levels of confidence are explored, and false positives (identifying a safe site as high risk) and false negatives (identifying a high-risk site as safe) are compared across methods. Finally, the effects of crash history duration on the three HSID approaches are assessed. The results illustrate that the empirical Bayes technique significantly outperforms the ranking and confidence interval techniques (with certain caveats). As found by others, false positives and false negatives are inversely related. Three years of crash history appears, in general, to be an appropriate duration.
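The evaluation logic described above, in which false positives and false negatives can be counted because the true site means are known, can be sketched in a few lines. The example below is a hedged stand-in, not the paper's simulation design: the safety performance function, dispersion parameter and traffic volumes are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_flag = 500, 50

aadt = rng.lognormal(mean=8.0, sigma=0.6, size=n_sites)      # hypothetical traffic volumes
mu_spf = np.exp(-6.0) * aadt ** 0.8                          # assumed safety performance function
phi = 2.0                                                    # assumed NB dispersion parameter
site_effect = rng.gamma(shape=phi, scale=1.0 / phi, size=n_sites)
true_mean = mu_spf * site_effect                             # known ground truth per site
obs = rng.poisson(true_mean)                                 # simulated observed crash counts

truly_high = set(np.argsort(true_mean)[-n_flag:])            # the sites that really are hotspots

def fp_fn(flagged):
    flagged = set(flagged)
    return len(flagged - truly_high), len(truly_high - flagged)

# Simple ranking on observed counts vs. EB shrinkage toward the SPF prediction
w = phi / (phi + mu_spf)
eb = w * mu_spf + (1 - w) * obs
print("simple ranking  (FP, FN):", fp_fn(np.argsort(obs)[-n_flag:]))
print("empirical Bayes (FP, FN):", fp_fn(np.argsort(eb)[-n_flag:]))
```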

9.
When developing a product, it is important to consider product performance from a user perspective. This type of evaluation can be done through operational testing: assessing the ability of representative users to satisfactorily accomplish tasks with the product in operationally representative environments. This process can be expensive and time-consuming, but it is critical to understanding whether the product can adequately do the job for which it was designed. We show how an existing design of experiments (DOE) process for operational testing can be leveraged to construct a Bayesian adaptive design. This design, nested within the larger design created by the DOE process, allows interim analyses to stop testing early for success or futility. Representative simulations are presented to demonstrate how these interim analyses can be used in an operational test setting, and reductions in the number of necessary test events are shown. The application of Bayesian adaptive design methods will allow future operational testing to be conducted in less time and at less expense, on average, without compromising the ability of the existing process to verify that the product meets the user's needs.

10.
In the analysis of accelerated life testing (ALT) data, a stress-life model is typically used to relate results obtained at stressed conditions to those at the use condition. For example, the Arrhenius model has been widely used for accelerated testing involving high temperature. Motivated by the fact that some prior knowledge of particular model parameters is usually available, this paper proposes a sequential constant-stress ALT scheme and its Bayesian inference. Under this scheme, testing at the highest stress is conducted first to quickly generate failures. Then, using the proposed Bayesian inference method, information obtained at the highest stress is used to construct prior distributions for data analysis at lower stress levels. In this paper, two frameworks of the Bayesian inference method are presented, namely, the all-at-one prior distribution construction and the full sequential prior distribution construction. Assuming Weibull failure times, we (1) derive the closed-form expression for estimating the smallest extreme value location parameter at each stress level, (2) compare the performance of the proposed Bayesian inference with that of MLE by simulation, and (3) assess the risk of incorporating empirical engineering knowledge into ALT data analysis under the proposed framework. Step-by-step illustrations of both frameworks are presented using a real-life ALT data set.
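A minimal sketch of the sequential idea follows. It is not the paper's framework: the Weibull shape parameter is assumed known, the conjugate Gamma prior and the acceleration factor between stress levels are invented for the example, and only the step "posterior at the highest stress becomes a prior at the next stress" is illustrated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
beta = 1.8                                  # assumed known Weibull shape parameter
t_high = rng.weibull(beta, size=20) * 150   # simulated failure times at the highest stress

# Conjugate Gamma prior on lambda = eta**(-beta); vague Gamma(a0, b0) assumed
a0, b0 = 0.01, 0.01
a, b = a0 + len(t_high), b0 + np.sum(t_high ** beta)   # posterior is Gamma(a, b)

# SEV location mu = log(eta) = -log(lambda) / beta; posterior samples of mu
lam = stats.gamma.rvs(a, scale=1 / b, size=10_000, random_state=3)
mu_high = -np.log(lam) / beta
print("posterior mean of SEV location at highest stress:", mu_high.mean())

# Assumed acceleration (time-scaling) factor between the highest and the next stress:
AF = 5.0
mu_low_prior = mu_high + np.log(AF)          # implied prior for the lower stress level
print("implied prior mean of SEV location at lower stress:", mu_low_prior.mean())
```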

11.
Elías Moreno 《TEST》2005,14(1):181-198
The one-sided testing problem can be naturally formulated as the comparison between two nonnested models. In an objective Bayesian setting, that is, when subjective prior information is not available, no general method exists either for deriving proper prior distributions on the parameters or for computing Bayes factors and model posterior probabilities. The encompassing approach solves this difficulty by converting the problem into a nested model comparison for which standard methods can be applied to derive proper priors. We argue that the usual way of encompassing does not have a Bayesian justification, and we propose a variant of this method that provides an objective Bayesian solution. The solution proposed here is further extended to the case where nuisance parameters are present and where the hypotheses to be tested are separated by an interval. Some illustrative examples are given for regular and non-regular sampling distributions. This paper has been supported by Ministerio de Ciencia y Tecnología under grant BEC20001-2982.

12.
Process capability indices provide numerical measures for comparing the output of a process with the client's expectations. However, most existing research has assessed process capability with the traditional frequentist method based on a single sample. An alternative to this approach is to use a Bayesian method. In this paper, we utilize a Bayesian approach based on subsamples to check process capability via the capability index Cpk. As a new contribution, we use an informative normal prior distribution and the sufficient statistics of the parameters to derive the posterior distribution. A capability test is carried out, and the posterior probability p that the process under investigation is capable is derived based on the most widely used index, Cpk. Finally, a numerical example is given to clarify the method.
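The kind of calculation the abstract describes can be sketched with a standard conjugate Normal-Inverse-Gamma analysis. The snippet below is illustrative only (the prior hyperparameters, specification limits and data are assumed, and the paper's exact subsample-based derivation is not reproduced): it computes the posterior probability that Cpk exceeds a required level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(10.02, 0.03, size=30)          # pooled measurement data (hypothetical)
LSL, USL, c_req = 9.85, 10.15, 1.33

# Normal-Inverse-Gamma prior on (mu, sigma^2); hyperparameters assumed
m0, k0, a0, b0 = 10.0, 1.0, 2.0, 0.001
n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)
kn = k0 + n
mn = (k0 * m0 + n * xbar) / kn
an = a0 + n / 2
bn = b0 + 0.5 * ((n - 1) * s2 + k0 * n * (xbar - m0) ** 2 / kn)

# Draw from the joint posterior and evaluate Cpk for each draw
sigma2 = stats.invgamma.rvs(an, scale=bn, size=20_000, random_state=5)
mu = rng.normal(mn, np.sqrt(sigma2 / kn))
cpk = np.minimum(USL - mu, mu - LSL) / (3 * np.sqrt(sigma2))
print("posterior P(Cpk > 1.33):", np.mean(cpk > c_req))
```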

13.
This paper develops a methodology for assessing the validity of a reliability computation model using the concept of Bayesian hypothesis testing, by comparing the model prediction with the experimental observation, when only one computational model is available to evaluate system behavior. Time-independent and time-dependent problems are investigated, considering both cases with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. When statistical uncertainty exists in the model, in addition to applying an estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified by treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides a rational criterion to decision-makers for the acceptance or rejection of the computational model.
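The basic ingredient of such an assessment, the Bayes factor comparing "the model prediction is correct" against "the true response may differ", can be sketched for a simple Normal case. The example below is an illustration of the general idea only, not the paper's time-dependent methodology; the prediction, measurement noise, observations and the prior spread under the alternative are all assumed.

```python
import numpy as np
from scipy import stats

y_pred = 100.0                                   # deterministic model prediction
sigma_exp = 4.0                                  # assumed measurement noise (std dev)
y_obs = np.array([104.1, 101.8, 106.0, 103.2])   # hypothetical experimental observations
tau = 10.0                                       # assumed prior spread of the true mean under H1

n, ybar = len(y_obs), y_obs.mean()
var_bar = sigma_exp**2 / n
# For this Normal setup the Bayes factor reduces to a ratio of densities of the sample mean
bf01 = (stats.norm.pdf(ybar, y_pred, np.sqrt(var_bar))
        / stats.norm.pdf(ybar, y_pred, np.sqrt(tau**2 + var_bar)))
print("Bayes factor B01 in favour of the model:", round(bf01, 3))
print("P(model valid | data) with equal prior odds:", round(bf01 / (1 + bf01), 3))
```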

14.
We study sample sizes for testing as required for Bayesian reliability demonstration in terms of failure-free periods after testing, under the assumption that the tests lead to zero failures. For the process after testing, we consider both deterministic and random numbers of tasks, including tasks arriving as Poisson processes. It turns out that the deterministic case is the worst in the sense that it requires the most tasks to be tested. We consider such reliability demonstration for a single type of task, as well as for multiple types of tasks to be performed by one system. We also consider the situation where tests of different types of tasks may have different costs, aiming at minimal expected total costs and assuming that a failure in the process would be catastrophic, in the sense that the process would be discontinued. Generally, these inferences are very sensitive to the choice of prior distribution, so one must be very careful when interpreting priors as non-informative.
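A simple Beta-Binomial version of this sample-size question (a hedged stand-in for the paper's inferences, with an assumed uniform prior on the per-task failure probability and a deterministic number of post-test tasks) can be computed directly: find the smallest number of zero-failure tests for which the posterior predictive probability of a failure-free period meets the target.

```python
# Beta(a0, b0) prior on the per-task failure probability; values assumed for illustration.
a0, b0 = 1.0, 1.0
m = 100          # tasks that must be failure-free after testing (deterministic case)
target = 0.95    # required predictive probability of the failure-free period

def pred_prob_all_succeed(n, a=a0, b=b0, m=m):
    """P(next m tasks all succeed | n tests with zero failures) under the Beta posterior."""
    p = 1.0
    for j in range(m):
        p *= (b + n + j) / (a + b + n + j)
    return p

n = 0
while pred_prob_all_succeed(n) < target:
    n += 1
print("required number of zero-failure tests:", n)   # large: zero-failure demonstration is demanding
print("achieved predictive probability:", round(pred_prob_all_succeed(n), 4))
```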

15.
余学锋 《计量学报》2000,21(4):314-318
This paper presents a Bayesian method for characterizing measurement uncertainty. The method analyzes the posterior distribution of the measurement results for the object under measurement to obtain the parameter values needed to characterize the uncertainty. Compared with classical statistical methods, the Bayesian method offers higher estimation accuracy and a more objective characterization of the uncertainty.
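A minimal sketch of this idea follows (illustrative values only, not from the paper): repeated indications of a measurand are combined with a conjugate Normal prior, the indication variance being assumed known, and the posterior mean and standard deviation serve as the estimate and its standard uncertainty.

```python
import numpy as np

x = np.array([10.013, 10.009, 10.011, 10.015, 10.010])   # hypothetical indications (mm)
sigma = 0.004                                            # assumed known repeatability (mm)
m0, s0 = 10.000, 0.010                                   # assumed prior knowledge of the measurand

n, xbar = len(x), x.mean()
post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)            # conjugate Normal update
post_mean = post_var * (m0 / s0**2 + n * xbar / sigma**2)
print(f"estimate: {post_mean:.4f} mm, standard uncertainty: {np.sqrt(post_var):.4f} mm")
```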

16.
This paper compares Evidence Theory (ET) and Bayesian Theory (BT) for uncertainty modeling and decision under uncertainty, when the evidence about uncertainty is imprecise. The basic concepts of ET and BT are introduced and the ways these theories model uncertainties, propagate them through systems and assess the safety of these systems are presented. ET and BT approaches are demonstrated and compared on challenge problems involving an algebraic function whose input variables are uncertain. The evidence about the input variables consists of intervals provided by experts. It is recommended that a decision-maker compute both the Bayesian probabilities of the outcomes of alternative actions and their plausibility and belief measures when evidence about uncertainty is imprecise, because this helps assess the importance of imprecision and the value of additional information. Finally, the paper presents and demonstrates a method for testing approaches for decision under uncertainty in terms of their effectiveness in making decisions.

17.
A control chart can effectively reflect whether a manufacturing process is currently in control. The calculation of control limits has traditionally relied on the frequentist approach, which requires a large sample size for accurate estimation. A conjugate Bayesian approach is introduced to correct the error in control limits calculated with the frequentist approach in multi-batch, low-volume production. Bartlett's test, an analysis-of-variance test and a standardisation treatment are used to construct a proper prior distribution for calculating the Bayes estimators of the process distribution parameters and, in turn, the control limits. A case study indicates that the conjugate Bayesian approach performs better than the traditional frequentist approach when the sample size is small.
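The flavour of such a correction can be sketched with a conjugate Normal-Inverse-Gamma prior. This is a generic illustration with assumed hyperparameters and simulated batches, not the paper's Bartlett/ANOVA-based prior construction: the Bayes estimators of the process mean and variance replace the small-sample frequentist estimates when setting the X-bar chart limits.

```python
import numpy as np

rng = np.random.default_rng(6)
batches = [rng.normal(50.0, 1.2, size=5) for _ in range(4)]   # four small batches (hypothetical)
x = np.concatenate(batches)
n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)

# Normal-Inverse-Gamma prior, assumed to come from experience with similar products
m0, k0, a0, b0 = 50.0, 2.0, 3.0, 3.0
kn = k0 + n
mu_hat = (k0 * m0 + n * xbar) / kn                   # Bayes estimate of the process mean
an = a0 + n / 2
bn = b0 + 0.5 * ((n - 1) * s2 + k0 * n * (xbar - m0) ** 2 / kn)
sigma_hat = np.sqrt(bn / (an - 1))                   # sqrt of the posterior mean of sigma^2

subgroup = 5
UCL = mu_hat + 3 * sigma_hat / np.sqrt(subgroup)
LCL = mu_hat - 3 * sigma_hat / np.sqrt(subgroup)
print(f"Bayes control limits for subgroup means: [{LCL:.2f}, {UCL:.2f}]")
```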

18.
This paper presents two techniques, the proper orthogonal decomposition (POD) and the stochastic collocation method (SCM), for constructing surrogate models to accelerate the Bayesian inference approach for parameter estimation problems associated with partial differential equations. POD is a model reduction technique that derives reduced-order models using an optimal problem-adapted basis to effect a significant reduction in problem size and hence computational cost. SCM is an uncertainty propagation technique that approximates the parameterized solution and reduces further forward solves to function evaluations. The utility of the techniques is assessed on the non-linear inverse problem of probabilistically calibrating scalar Robin coefficients from boundary measurements arising in the quenching process and non-destructive evaluation. A hierarchical Bayesian model that flexibly handles the regularization parameter and the noise level is employed, and the posterior state space is explored by Markov chain Monte Carlo. The numerical results indicate that significant computational gains can be realized without sacrificing accuracy.
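To make the surrogate-construction step concrete, the sketch below builds a POD basis from snapshots of a cheap one-parameter analytic field and fits the reduced coefficients as polynomials in the parameter, a crude stand-in for stochastic collocation. Everything in it (the field, the parameter range, the polynomial degree) is assumed for illustration and is unrelated to the paper's Robin-coefficient problem.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 200)
params = rng.uniform(0.5, 2.0, size=40)                        # training parameter samples
snapshots = np.array([np.exp(-p * x) * np.sin(3 * x) for p in params]).T   # 200 x 40

# POD basis from the SVD of the snapshot matrix, retaining ~99.99% of the energy
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
basis = U[:, :r]
print("retained POD modes:", r)

# Fit each reduced coefficient as a polynomial in p, so evaluating the surrogate
# for a new parameter value requires no full-order solve (e.g. inside MCMC).
coeffs = basis.T @ snapshots                                   # r x 40 reduced coordinates
polys = [np.polynomial.Polynomial.fit(params, c, deg=6) for c in coeffs]

def surrogate(p):
    return basis @ np.array([poly(p) for poly in polys])

err = np.abs(surrogate(1.3) - np.exp(-1.3 * x) * np.sin(3 * x)).max()
print("max surrogate error at p = 1.3:", float(err))
```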

19.
The usual practice of judging process capability by evaluating point estimates of process capability indices has the flaw that the error distributions of these estimates are not assessed. However, the distributions of these estimates are usually so complicated that it is very difficult to obtain good interval estimates. In this paper we adopt a Bayesian approach to obtain interval estimates, particularly for the index Cpm. The posterior probability p that the process under investigation is capable is derived; the credible interval, a Bayesian analogue of the classical confidence interval, can then be obtained. We claim that the process is capable if all the points in the credible interval are greater than the pre-specified capability level ω, say 1.33. To make this Bayesian procedure easy for practitioners to implement on manufacturing floors, we tabulate the minimum values of Ĉpm/ω for which the posterior probability p reaches the desired level, say 95%. For the special cases where the process mean equals the target value for Cpm and equals the midpoint of the two specification limits for Cpk, the procedure is even simpler; only chi-square tables are needed.

20.
A building block approach to model validation may proceed through various levels, such as material to component to subsystem to system, comparing model predictions with experimental observations at each level. Usually, experimental data becomes scarce as one proceeds from lower to higher levels. This paper presents a structural equation modeling approach to make use of the lower-level data for higher-level model validation under uncertainty, integrating several components: lower-level data, higher-level data, computational model, and latent variables. The method proposed in this paper uses latent variables to model two sets of relationships, namely, the computational model to system-level data, and lower-level data to system-level data. A Bayesian network with Markov chain Monte Carlo simulation is applied to represent the two relationships and to estimate the influencing factors between them. Bayesian hypothesis testing is employed to quantify the confidence in the predictive model at the system level, and the role of lower-level data in the model validation assessment at the system level. The proposed methodology is implemented for hierarchical assessment of three validation problems, using discrete observations and time-series data.

