Similar Literature
20 similar documents found.
1.
This paper documents the application of the Conway–Maxwell–Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subject to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and to compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of goodness-of-fit (GOF) statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data (whereas the NB distribution cannot, or has difficulty converging), which have sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.
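As a minimal numerical sketch of the distribution discussed above (not code from the paper), the snippet below evaluates the COM-Poisson log-pmf with a truncated normalizing constant; the truncation level, the function name and the parameter values are illustrative assumptions, and the GLM link to entering flows is omitted.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

def com_poisson_logpmf(y, lam, nu, j_max=200):
    """Log-pmf of the Conway-Maxwell-Poisson distribution.

    P(Y = y) = lam**y / (y!)**nu / Z(lam, nu), with the normalizing constant
    Z(lam, nu) = sum_j lam**j / (j!)**nu truncated at j_max terms (adequate
    for moderate lam). nu < 1 gives over-dispersion, nu > 1 under-dispersion,
    and nu = 1 recovers the ordinary Poisson distribution.
    """
    j = np.arange(j_max)
    log_z = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))
    return y * np.log(lam) - nu * gammaln(y + 1) - log_z

# Sanity check: at nu = 1 the pmf should match the ordinary Poisson pmf.
print(np.exp(com_poisson_logpmf(3, lam=2.5, nu=1.0)))   # ~0.2138
print(poisson.pmf(3, 2.5))                               # same value
```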

2.
Quadrature rules are developed for exactly integrating products of polynomials and generalized functions over triangular and tetrahedral domains. These quadrature rules greatly simplify the implementation of finite element methods that involve integrals over volumes and interfaces that are not coincident with the element boundaries. Specifically, the integrands considered here consist of a quadratic polynomial multiplied by a Heaviside or Dirac delta function operating on a linear polynomial. This form allows for exact integration of expressions obtained from linear finite elements over domains and interfaces defined by a linear level set function. Exact quadrature rules are derived that involve fixed quadrature point locations with weights that depend continuously on the nodal level set values. Compared with methods involving explicit integration over subdomains, the quadrature rules developed here accommodate degenerate interface geometries without any need for special consideration and provide analytical Jacobian information describing the dependence of the integrals on the nodal level set values. The accuracy of the method is demonstrated for a simple conduction problem with Neumann- and Robin-type boundary conditions. Copyright © 2007 John Wiley & Sons, Ltd.
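For orientation, the integrals these rules target have the following generic element-level form (a restatement of the abstract in symbols; $q$, $H$, $\delta$ and $\phi$ are illustrative notation, not the authors'):

```latex
% Quadratic integrand q, Heaviside H, Dirac delta \delta, linear level set \phi:
\int_{\Omega_e} q(\mathbf{x})\, H\bigl(\phi(\mathbf{x})\bigr)\, \mathrm{d}\Omega
\qquad \text{and} \qquad
\int_{\Omega_e} q(\mathbf{x})\, \delta\bigl(\phi(\mathbf{x})\bigr)\, \mathrm{d}\Omega .
```

The Heaviside integral restricts the volume integration to the side of the element where $\phi > 0$, while the Dirac delta integral collapses to an integral over the interface $\phi = 0$; the quadrature weights then depend continuously on the nodal values of $\phi$.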

3.
The process capability index (PCI) is a quality-control statistic used mostly in the manufacturing industry to assess the capability of a monitored process. It is of great significance to quality control engineers as it quantifies the relation between the actual performance of the process and the preset specifications of the product. Most of the traditional PCIs perform well when the process follows normal behaviour. However, using these traditional indices to evaluate a non-normally distributed process often leads to inaccurate results. In this article, we consider a new PCI, Cpy, suggested by Maiti et al., which can be used for normal as well as non-normal random variables. This article addresses the different methods of estimation of the PCI Cpy, from both frequentist and Bayesian viewpoints, for the generalized Lindley distribution suggested by Nadarajah et al. We briefly describe different frequentist approaches, namely, maximum likelihood estimators, least-squares and weighted least-squares estimators, and maximum product of spacings estimators. Next, we consider Bayes estimation under the squared error loss function using gamma priors for both the shape and scale parameters of the considered model. We use Tierney and Kadane's method as well as a Markov chain Monte Carlo procedure to compute approximate Bayes estimates. In addition, two parametric bootstrap confidence intervals using frequentist approaches are provided for comparison with the highest posterior density credible intervals. Furthermore, a Monte Carlo simulation study has been carried out to compare the performance of the classical and Bayes estimates of Cpy in terms of mean squared errors, along with average widths and coverage probabilities. Finally, two real data sets have been analysed for illustrative purposes.
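As a rough illustration of the frequentist plug-in route, the sketch below computes a yield-based index of the form Cpy = (F(USL) − F(LSL)) / p0, where F is the CDF fitted by maximum likelihood and p0 a target yield; this form and the default p0 = 0.9973 are assumptions that should be checked against Maiti et al., and a lognormal stand-in is fitted here instead of the generalized Lindley distribution.

```python
import numpy as np
from scipy import stats

def cpy(cdf, usl, lsl, p0=0.9973):
    """Plug-in estimate of a yield-based capability index, (F(USL) - F(LSL)) / p0.

    `cdf` is the CDF of the fitted (possibly non-normal) process distribution and
    p0 is the target yield; the exact definition of Cpy should be verified against
    Maiti et al. -- this is only an illustrative estimator.
    """
    return (cdf(usl) - cdf(lsl)) / p0

# Illustrative non-normal process data and an ML-fitted lognormal model.
data = stats.lognorm.rvs(s=0.3, scale=np.exp(1.0), size=200, random_state=1)
shape, loc, scale = stats.lognorm.fit(data, floc=0)
fitted_cdf = lambda x: stats.lognorm.cdf(x, shape, loc=loc, scale=scale)
print(cpy(fitted_cdf, usl=5.0, lsl=1.0))
```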

4.
We give a necessary and sufficient condition for the class of $p$-ary binomial functions proposed by Jia et al. (IEEE Trans Inf Theory 58(9):6054–6063, 2012) to be regular bent functions, and thus settle the open problem raised at the end of that paper. Moreover, we investigate the bentness of the proposed binomials in the case $\gcd(\frac{t}{2}, p^{\frac{n}{2}}+1)=1$ for some even integers $t$ and $n$. Computer experiments show that the new class contains bent functions that are affinely inequivalent to known monomial and binomial ones.
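A brute-force way to verify $p$-ary bentness is via the Walsh transform; the sketch below works over the vector space $\mathbb{F}_p^n$ (the paper uses trace representations over $\mathbb{F}_{p^n}$, which is equivalent up to the choice of inner product), and the toy quadratic form tested is a classical bent function, not one of the binomials studied.

```python
import itertools
import cmath

def walsh_spectrum(f, p, n):
    """Walsh spectrum of f: F_p^n -> F_p by brute force (small p and n only).

    W_f(b) = sum_x omega^(f(x) - <b, x>) with omega = exp(2*pi*i/p).
    f is bent iff |W_f(b)| = p**(n/2) for every b; it is regular bent if, in
    addition, every W_f(b) equals p**(n/2) times a p-th root of unity.
    """
    omega = cmath.exp(2j * cmath.pi / p)
    points = list(itertools.product(range(p), repeat=n))
    return {b: sum(omega ** ((f(x) - sum(bi * xi for bi, xi in zip(b, x))) % p)
                   for x in points)
            for b in points}

def is_bent(f, p, n, tol=1e-8):
    target = p ** (n / 2)
    return all(abs(abs(w) - target) < tol for w in walsh_spectrum(f, p, n).values())

# Toy check: the quadratic form x1*x2 + x3*x4 over F_3^4 is bent.
print(is_bent(lambda x: (x[0] * x[1] + x[2] * x[3]) % 3, p=3, n=4))  # True
```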

5.
The design of an experiment can always be considered at least implicitly Bayesian, with prior knowledge used informally to aid decisions such as the variables to be studied and the choice of a plausible relationship between the explanatory variables and measured responses. Bayesian methods allow uncertainty in these decisions to be incorporated into design selection through prior distributions that encapsulate information available from scientific knowledge or previous experimentation. Further, a design may be explicitly tailored to the aim of the experiment through a decision-theoretic approach using an appropriate loss function. We review the area of decision-theoretic Bayesian design, with particular emphasis on recent advances in computational methods. For many problems arising in industry and science, experiments result in a discrete response that is well described by a member of the class of generalized linear models. Bayesian design for such nonlinear models is often seen as impractical because the expected loss is analytically intractable and numerical approximations are usually computationally expensive. We describe how Gaussian process emulation, commonly used in computer experiments, can play an important role in facilitating Bayesian design for realistic problems. A main focus is the combination of Gaussian process regression to approximate the expected loss with cyclic descent (coordinate exchange) optimization algorithms to allow optimal designs to be found for previously infeasible problems. We also present the first optimal design results for statistical models formed from dimensional analysis, a methodology widely employed in the engineering and physical sciences to produce parsimonious and interpretable models. Using the famous paper helicopter experiment, we show the potential for the combination of Bayesian design, generalized linear models, and dimensional analysis to produce small but informative experiments.
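A toy sketch in the spirit of the combination described above: a Monte Carlo (pseudo-Bayesian D-optimality) approximation of the expected loss for a one-factor logistic GLM is smoothed by a Gaussian process emulator inside a coordinate-exchange loop. The loss, the prior, the candidate grid and the kernel settings are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
prior = rng.normal([0.0, 1.0], [0.5, 0.5], size=(200, 2))   # prior draws of (beta0, beta1)

def expected_loss(design):
    """Monte Carlo pseudo-Bayesian D-optimality loss for a one-factor logistic GLM."""
    X = np.column_stack([np.ones_like(design), design])
    total = 0.0
    for b0, b1 in prior:
        prob = 1.0 / (1.0 + np.exp(-(b0 + b1 * design)))
        info = X.T @ (X * (prob * (1 - prob))[:, None])      # expected Fisher information
        sign, logdet = np.linalg.slogdet(info)
        total += -logdet if sign > 0 else 1e6                 # penalize singular designs
    return total / len(prior)

design = rng.uniform(-1, 1, size=6)                           # six runs of one factor
candidates = np.linspace(-1, 1, 21)
for sweep in range(3):                                        # cyclic (coordinate) descent
    for j in range(design.size):
        tried = rng.choice(candidates, size=8, replace=False)
        losses = [expected_loss(np.where(np.arange(design.size) == j, c, design))
                  for c in tried]
        gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-4))
        gp.fit(tried[:, None], losses)                        # emulate the expected loss
        best = candidates[np.argmin(gp.predict(candidates[:, None]))]
        trial = np.where(np.arange(design.size) == j, best, design)
        if expected_loss(trial) < expected_loss(design):
            design[j] = best
print(np.sort(design))
```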

6.
The main feature of partition of unity methods such as the generalized or extended finite element method is their ability to utilize a priori knowledge about the solution of a problem in the form of enrichment functions. However, analytical derivation of enrichment functions with good approximation properties is mostly limited to two-dimensional linear problems. This paper presents a procedure to numerically generate proper enrichment functions for three-dimensional problems with confined plasticity where plastic evolution is gradual. This procedure involves the solution of boundary value problems around local regions exhibiting nonlinear behavior and the enrichment of the global solution space with the local solutions through the partition of unity method framework. This approach can produce accurate nonlinear solutions with a reduced computational cost compared to standard finite element methods, since computationally intensive nonlinear iterations can be performed on coarse global meshes after the creation of enrichment functions properly describing localized nonlinear behavior. Several three-dimensional nonlinear problems based on rate-independent J2 plasticity theory with isotropic hardening are solved using the proposed procedure to demonstrate its robustness, accuracy and computational efficiency.

7.
Models encountered in computational mechanics can involve many time scales. When these time scales cannot be separated, one must solve the evolution model over the entire time interval by using the finest time step that the model implies. In some cases, the solution procedure becomes cumbersome because of the extremely large number of time steps needed for integrating the evolution model over the whole time interval. In this paper, we consider an alternative approach that consists of recasting the time axis (one-dimensional in nature) as a multidimensional time space. Then, to circumvent the resulting curse of dimensionality, the proper generalized decomposition is applied, allowing a fast solution with significant computing-time savings with respect to a standard incremental integration. Copyright © 2011 John Wiley & Sons, Ltd.
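Schematically, once time is recast as a multidimensional coordinate $(t_1,\dots,t_d)$, the proper generalized decomposition seeks a separated representation of the following generic form (standard PGD notation assumed here, not quoted from the paper):

```latex
u(t_1, t_2, \dots, t_d) \;\approx\; \sum_{i=1}^{N} \prod_{k=1}^{d} F_i^{k}(t_k),
```

so each enrichment step requires solving only $d$ one-dimensional problems in the individual time coordinates rather than stepping through the full time interval with the finest time step.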

8.
This paper investigates the characteristics of extreme events for series systems. A methodology is developed to determine the Gumbel Type of a series system, given that the Gumbel Types of the components are known. Determining the propagation of the Gumbel Types can be accomplished without knowing the exact probability density functions of the components and without calculating the analytic form of the distribution of the overall system. In addition, an analytical technique is developed to determine the parameters of the extreme value distribution—the characteristic largest value and the inverse measure of dispersion—for the overall series system. Finally, these three pieces of information—the Gumbel Type, the characteristic largest value, and the inverse measure of dispersion—are combined to calculate a conditional expected value of extreme events of the overall series system.
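For reference, the Type I (Gumbel) largest-value distribution written with the two parameters named above, the characteristic largest value $u_n$ and the inverse measure of dispersion $\alpha_n$ (standard extreme-value notation, not quoted from the paper):

```latex
F_{X_{\max}}(x) = \exp\!\left[-\,e^{-\alpha_n\,(x - u_n)}\right],
\qquad
\mathbb{E}[X_{\max}] = u_n + \frac{\gamma}{\alpha_n},
```

where $\gamma \approx 0.5772$ is the Euler–Mascheroni constant; conditional expectations of exceedances can then be computed from this closed-form CDF.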

9.
The constrained optimization of resource allocation to minimize the probability of failure of an engineered system relies on a probabilistic risk analysis of that system and on 'risk/cost functions'. These functions describe, for each possible improvement of the system's robustness, the corresponding reliability gain for the component or management factor considered for upgrading. These improvements can include, for example, the choice of components of different robustness levels (at different costs), the addition of redundancies, or changes in operating and maintenance procedures. The optimization model is generally constrained by a maximum budget, a schedule deadline, or a maximum number of qualified personnel. A key question is thus the nature of the risk/cost function linking the costs involved and the corresponding failure-risk reduction. Most of the methods proposed in the past have relied on continuous, convex risk/cost functions reflecting decreasing marginal returns. In reality, the risk/cost functions can be simple step functions (e.g. a discrete choice among possible components), discontinuous functions characterized by continuous segments between points of discontinuity (e.g. a discrete choice among components that can be of continuously increasing levels of robustness), or continuous functions (e.g. exponentially decreasing failure risk with added resources). This paper describes a general method for the optimization of the robustness of a complex engineered system in which all three risk/cost function types may be relevant. We illustrate the method with a satellite design problem. We conclude with a discussion of the complexity of the resolution of this general type of optimization problem given the number and the types of variables involved.

10.
Accident data sets can include some unusual data points that are not typical of the rest of the data. The presence of these data points (usually termed outliers) can have a significant impact on the estimates of the parameters of safety performance functions (SPFs). Few studies have considered outlier analysis in the development of SPFs, and in these studies the practice has been to identify and then exclude outliers from further analysis. This paper introduces alternative mixture models based on the multivariate Poisson lognormal (MVPLN) regression. The proposed approach presents outlier-resistant modeling techniques that provide robust safety inferences by down-weighting the outlying observations rather than rejecting them. The first proposed model is a scale-mixture model that is obtained by replacing the normal distribution in the Poisson-lognormal hierarchy by the Student t distribution, which has heavier tails. The second model is a two-component mixture (contaminated normal model) where it is assumed that most of the observations come from a basic distribution, whereas the remaining few outliers arise from an alternative distribution that has a larger variance. The results indicate that the estimates of the extra-Poisson variation parameters were considerably smaller under the mixture models, leading to higher precision. Also, both mixture models identified the same set of outliers. In terms of goodness-of-fit, both mixture models outperformed the MVPLN. The outlier-rejecting MVPLN model provided a superior fit in terms of a much smaller DIC and smaller standard deviations for the parameter estimates. However, this approach tends to underestimate uncertainty by producing too-small standard deviations for the parameter estimates, which may lead to incorrect conclusions. It is recommended that the proposed outlier-resistant modeling techniques be used unless the exclusion of the outlying observations can be justified for data-related reasons (e.g., data collection errors).
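A univariate sketch of the two robust hierarchies described above (the paper's models are multivariate); writing the count at site $i$ as $y_i$ with covariates $\mathbf{x}_i$, the assumed forms are:

```latex
y_i \mid \varepsilon_i \sim \operatorname{Poisson}\!\bigl(\exp(\mathbf{x}_i^{\top}\boldsymbol{\beta} + \varepsilon_i)\bigr),
\\[4pt]
\text{scale mixture (Student-}t\text{):}\quad
\varepsilon_i \sim t_{\nu}(0,\sigma^2)
\;\Longleftrightarrow\;
\varepsilon_i \mid \lambda_i \sim \mathcal{N}(0,\sigma^2/\lambda_i),\;
\lambda_i \sim \operatorname{Gamma}\!\bigl(\tfrac{\nu}{2},\tfrac{\nu}{2}\bigr),
\\[4pt]
\text{contaminated normal:}\quad
\varepsilon_i \sim (1-\pi)\,\mathcal{N}(0,\sigma^2) + \pi\,\mathcal{N}(0,\kappa\sigma^2),\qquad \kappa > 1 .
```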

11.
In a complex manufacturing environment, there are hundreds of interrelated processes that form a complex hierarchy. This is especially true of semiconductor manufacturing. In such an environment, modeling and understanding the impact of critical process parameters on final performance measures such as defectivity is a challenging task. In addition, a number of modeling issues, such as a small number of observations relative to the number of process variables, difficulty in formulating a high-dimensional design matrix, and missing data due to failures, pose challenges in using empirical modeling techniques such as classical linear modeling as well as generalized linear modeling (GLM) approaches. Our approach is to utilize GLM in a hierarchical structure to understand the impact of key process and subprocess variables on the system output. A two-level approach, comprising subprocess modeling and meta-modeling, is presented, and modeling-related issues such as bias and variance estimation are considered. The hierarchical GLM approach helps not only in improving output measures, but also in identifying and improving subprocess variables that contribute to poor output quality. Copyright © 2011 John Wiley & Sons, Ltd.

12.
Many of the formulations of current research interest, including isogeometric methods and the extended finite element method, use nontraditional basis functions. Some, such as subdivision surfaces, may not have convenient analytical representations. The concept of an element, if appropriate at all, no longer coincides with the traditional definition. Developing new software for each new class of basis functions is a large research burden, especially if the problems involve large deformations, nonlinear materials, and contact. The objective of this paper is to present a method that separates as much as possible the generation and evaluation of the basis functions from the analysis, resulting in a formulation that can be implemented within the traditional structure of a finite element program but that permits the use of arbitrary sets of basis functions that are defined only through the input file. Elements ranging from a traditional linear four-node tetrahedron through a higher-order element combining XFEM and isogeometric analysis may be specified entirely through an input file without any additional programming. Applications of this framework with Lagrange elements, isogeometric elements, and XFEM basis functions for fracture are presented. Copyright © 2010 John Wiley & Sons, Ltd.

13.
We obtain in operator form the solution of a heat-transfer problem for bodies under generalized thermal effects and determine the structure of the transfer functions. We propose approximate equations for the interaction between the effects in order to calculate the mean volume and mean surface temperatures of the body. Translated from Inzhenerno-Fizicheskii Zhurnal, Vol. 19, No. 6, pp. 1110–1117, December, 1970.

14.
The transfer functions and approximate differential equations describing the interrelation between the temperature of a body and generalized thermal effects are obtained. The coefficients of the transfer functions are found with consideration of the shape of the investigated body. Translated from Inzhenerno-Fizicheskii Zhurnal, Vol. 18, No. 5, pp. 892–898, May, 1970.

15.
Binary test outcomes typically result from dichotomizing a continuous test variable, observable or latent. The effect of the threshold for test positivity on test sensitivity and specificity has been studied extensively in receiver operating characteristic (ROC) analysis. However, considerably less attention has been given to the study of the effect of the positivity threshold on the predictive value of a test. In this paper we present methods for the joint study of the positive (PPV) and negative (NPV) predictive values of diagnostic tests. We define the predictive receiver operating characteristic (PROC) curve, which consists of all possible pairs of PPV and NPV as the threshold for test positivity varies. Unlike the simple trade-off between sensitivity and specificity exhibited in the ROC curve, the PROC curve displays what is often a complex interplay between PPV and NPV as the positivity threshold changes. We study the monotonicity and other geometric properties of the PROC curve and propose summary measures for the predictive performance of tests. We also formulate and discuss regression models for the estimation of the effects of covariates.
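An empirical version of the PROC curve is straightforward to compute; the sketch below (illustrative code, not from the paper) sweeps the positivity threshold over the observed test values and records the (PPV, NPV) pair at each threshold.

```python
import numpy as np

def proc_curve(scores, labels, thresholds=None):
    """Empirical (PPV, NPV) pairs as the positivity threshold varies.

    `scores` are continuous test values, `labels` are 0/1 disease indicators,
    and a subject is called test-positive when its score exceeds the threshold.
    Undefined predictive values (empty positive or negative groups) are NaN.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    if thresholds is None:
        thresholds = np.unique(scores)
    ppv, npv = [], []
    for t in thresholds:
        pos = scores > t
        tp, fp = np.sum(pos & (labels == 1)), np.sum(pos & (labels == 0))
        tn, fn = np.sum(~pos & (labels == 0)), np.sum(~pos & (labels == 1))
        ppv.append(tp / (tp + fp) if tp + fp else np.nan)
        npv.append(tn / (tn + fn) if tn + fn else np.nan)
    return np.asarray(thresholds), np.array(ppv), np.array(npv)

# Example with a latent continuous marker shifted upward for diseased subjects.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=500)
x = rng.normal(loc=1.2 * y, scale=1.0)
thr, ppv, npv = proc_curve(x, y)
```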

16.
In this paper, a generalization of the characteristic finite element method for compressible, viscous gas dynamics is presented. Some preliminary numerical results illustrate the concept of the method.

17.
18.
Parameter estimation with variable sampling time is developed in this paper, using a continuous model. The input-output variables are approximated by using a generalized block-pulse series expansion. A method of determining the sampling interval is proposed; the algorithm depends on the variations of the input and output variables. Parameter estimation is carried out by using least-squares estimation with exponential data weighting. Two examples are presented to demonstrate that the method exhibits satisfactory results.
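One standard way to realize least-squares estimation with exponential data weighting is recursive least squares with a forgetting factor; the sketch below (illustrative, not the paper's block-pulse formulation) identifies a first-order discrete model to show the mechanics.

```python
import numpy as np

def rls_exponential(Phi, y, lam=0.98, delta=1e3):
    """Recursive least squares with exponential data weighting (forgetting factor lam).

    Phi is the (N, p) regressor matrix and y the (N,) measurement vector; older
    data are discounted geometrically by lam. The block-pulse construction of the
    regressors used in the paper is not reproduced here.
    """
    n, p = Phi.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)                              # large initial covariance
    for k in range(n):
        phi = Phi[k]
        gain = P @ phi / (lam + phi @ P @ phi)
        theta = theta + gain * (y[k] - phi @ theta)
        P = (P - np.outer(gain, phi) @ P) / lam
    return theta

# Toy identification of y[k] = 0.8*y[k-1] + 0.5*u[k-1] + noise.
rng = np.random.default_rng(3)
u = rng.normal(size=300)
y = np.zeros(300)
for k in range(1, 300):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.normal()
Phi = np.column_stack([y[:-1], u[:-1]])
print(rls_exponential(Phi, y[1:]))                     # approximately [0.8, 0.5]
```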

19.
Collision modification factors (CMFs) are considered the primary tools for estimating the effectiveness of safety treatments at road sites. Three main techniques are commonly used to estimate CMFs: the empirical Bayes (EB) method, the comparison-group (CG) method, and a combination of the EB and CG methods. CMF estimates from these techniques are usually provided with a measure of uncertainty of the estimate, in the form of a standard error and a confidence interval.
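For illustration only, a stripped-down sketch of the EB technique named above (the CG method and the variance correction used in the full EB estimator are omitted); the weight form assumes a negative binomial SPF with inverse dispersion parameter phi, and all inputs are hypothetical.

```python
import numpy as np

def eb_cmf(obs_before, obs_after, mu_before, mu_after, phi):
    """Naive empirical Bayes CMF estimate across treated sites.

    mu_* are SPF predictions for the before/after periods, phi is the SPF's
    negative binomial inverse dispersion parameter; the variance correction
    of the full EB estimator is omitted for brevity.
    """
    obs_before = np.asarray(obs_before, dtype=float)
    obs_after = np.asarray(obs_after, dtype=float)
    mu_before = np.asarray(mu_before, dtype=float)
    mu_after = np.asarray(mu_after, dtype=float)
    w = phi / (phi + mu_before)                           # EB shrinkage weight
    eb_before = w * mu_before + (1 - w) * obs_before      # EB estimate, before period
    expected_after = eb_before * (mu_after / mu_before)   # expectation without treatment
    return np.sum(obs_after) / np.sum(expected_after)

# Hypothetical three-site example.
print(eb_cmf([12, 7, 9], [8, 5, 6], [10.0, 6.0, 8.0], [10.5, 6.2, 8.3], phi=2.0))
```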

20.
Active safety functions intended to prevent vehicle crashes are becoming increasingly prominent in traffic safety. Successful evaluation of their effects needs to be based on a conceptual framework, i.e. agreed-upon concepts and principles for defining evaluation scenarios, performance metrics and pass/fail criteria. The aim of this paper is to suggest some initial ideas toward such a conceptual framework for active safety function evaluation, based on a central concept termed 'situational control'. Situational control represents the degree of control jointly exerted by a driver and a vehicle over the development of specific traffic situations. The proposed framework is intended to be applicable to the whole evaluation process, from 'translation' of accident data into evaluation scenarios and definition of evaluation hypotheses, to selection of performance metrics and criteria. It is also meant to be generic, i.e. applicable to driving simulator and test track experiments as well as field operational tests.
