Similar Literature
 20 similar documents found (search time: 15 ms)
1.
Poisson and negative binomial (NB) models have been used for several years to analyze traffic accident occurrence at intersections. There are, however, limitations in the use of such models. The Poisson model requires the variance-to-mean ratio of the accident data to be about 1, and both the Poisson and NB models require the accident data to be uncorrelated in time. Because of unobserved heterogeneity and serial correlation in accident data, both models are often inappropriate. A more suitable alternative is the random effect negative binomial (RENB) model, which, by treating the data as a time-series cross-section panel, can deal with the spatial and temporal effects in the data. This paper describes the use of the RENB model to identify the elements that affect intersection safety. To establish the suitability of the model, several goodness-of-fit statistics are used. The model is then applied to investigate the relationship between accident occurrence and the geometric, traffic and control characteristics of signalized intersections in Singapore. The results show that 11 variables significantly affect safety at the intersections. The total approach volume, the number of phases per cycle, the presence of an uncontrolled left-turn lane and the presence of a surveillance camera are among the most highly significant variables.
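The Poisson suitability check described above can be sketched with a simple dispersion index; the counts below are hypothetical, not data from the Singapore study.

```python
def variance_to_mean(counts):
    """Dispersion index of accident counts: a ratio near 1 supports a
    Poisson model; a ratio well above 1 points to a negative binomial."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

# Hypothetical yearly accident counts at one intersection
counts = [3, 7, 0, 12, 5, 9, 1, 15, 4, 8]
print(round(variance_to_mean(counts), 2))  # 3.55 -- overdispersed
```

A ratio of about 3.5 would violate the Poisson assumption and motivate the NB or RENB alternatives discussed in the abstract.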

2.
In this paper, two-state Markov switching models are proposed to study accident frequencies. These models assume that there are two unobserved states of roadway safety, and that roadway entities (roadway segments) can switch between these states over time. The states are distinct in the sense that accident frequencies in the different states are generated by separate counting processes (separate Poisson or negative binomial processes). To demonstrate the applicability of the approach, two-state Markov switching negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. Bayesian inference methods and Markov chain Monte Carlo (MCMC) simulations are used for model estimation. The estimated Markov switching models yield a superior statistical fit relative to the standard (single-state) negative binomial model. The more frequent state is found to be safer and correlated with better weather conditions, while the less frequent state is less safe and correlated with adverse weather conditions.
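A minimal sketch of the two-state idea, assuming made-up transition probabilities and Poisson rates (the paper's actual models are negative binomial, estimated by MCMC):

```python
import math
import random

def simulate_markov_switching_poisson(T, p_stay, lambdas, seed=0):
    """Simulate counts from a two-state Markov switching Poisson process.
    p_stay[s] is the probability of remaining in state s; lambdas[s] is the
    Poisson rate in state s (state 0: safer, state 1: less safe)."""
    rng = random.Random(seed)
    state, counts, states = 0, [], []
    for _ in range(T):
        # Poisson draw by Knuth's multiplication method
        limit, k, p = math.exp(-lambdas[state]), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        counts.append(k)
        states.append(state)
        # switch state with probability 1 - p_stay[state]
        if rng.random() > p_stay[state]:
            state = 1 - state
    return counts, states

counts, states = simulate_markov_switching_poisson(
    200, p_stay=[0.9, 0.8], lambdas=[2.0, 8.0])
```

Estimating which state generated each observation is the hard part the paper solves with Bayesian inference; the simulation only illustrates the data-generating assumption.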

3.
The so-called Lancaster probabilities on R^2 are a class of distributions satisfying an orthogonality condition involving orthogonal polynomials with respect to their marginal laws. They are characterized in the cases where the two identical margins are Gaussian, gamma (the latter are known results, but a new treatment is given), Poisson or negative binomial distributions. Some partial results are obtained in the cases of two different Poisson or negative binomial margins, and also in the case where one margin is gamma and the other is negative binomial.

4.
Yan Cui & Fukang Zhu, TEST (2018) 27(2): 428-452
Univariate integer-valued time series models, including integer-valued autoregressive (INAR) models and integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) models, have been well studied in the literature, but there has been little progress on multivariate models. Although some multivariate INAR models have been proposed, they do not provide enough flexibility for modeling count data, such as the volatility of the number of stock transactions. A bivariate Poisson INGARCH model was then suggested by Liu (Some models for time series of counts, Dissertation, Columbia University, 2012), but it can only accommodate positive cross-correlation between the two components. To remedy this defect, we propose a new bivariate Poisson INGARCH model, which is more flexible and allows for positive or negative cross-correlation. Stationarity and ergodicity of the new process are established. The maximum likelihood method is used to estimate the unknown parameters, and consistency and asymptotic normality of the estimators are given. A simulation study evaluates the estimators of the parameters of interest. Real and artificial data examples illustrate the good performance of the proposed model relative to the existing model.

5.
In this paper, we propose a Bayesian method for modelling count data by Poisson, binomial or negative binomial distributions. These three distributions have in common that the variance is, at most, a quadratic function of the mean. We use prior distributions on the variance function coefficients to consider the three possible models simultaneously and decide which one best fits the data. This approach sheds new light on the analysis of the Sibship data (Sokal and Rohlf, 1987). The Jeffreys-Lindley paradox is discussed through some illustrations.
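The shared property of the three candidate families, a variance that is at most quadratic in the mean, can be written out directly; the parameter values below are illustrative only.

```python
def variance_fn(mean, family, size=None, k=None):
    """Variance as an (at most) quadratic function of the mean:
    Poisson: V = m; binomial(n): V = m(1 - m/n); neg. binomial(k): V = m + m^2/k."""
    if family == "poisson":
        return mean
    if family == "binomial":
        return mean * (1 - mean / size)
    if family == "negbin":
        return mean + mean ** 2 / k
    raise ValueError(family)

m = 4.0
print(variance_fn(m, "poisson"),            # 4.0
      variance_fn(m, "binomial", size=10),  # 2.4 (underdispersed)
      variance_fn(m, "negbin", k=2))        # 12.0 (overdispersed)
```

Placing a prior on the quadratic coefficient, as the paper does, amounts to letting the data choose among these three variance shapes.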

6.
The usual assumption of a Poisson model for the number of chromosome aberrations in controlled calibration experiments implies a variance equal to the mean. However, it is known that chromosome aberration data from experiments involving high linear energy transfer (LET) radiations can be overdispersed, i.e. the variance is greater than the mean. Present methods for dealing with overdispersed chromosome data rely on frequentist statistical techniques. In this paper, the problem of overdispersion is considered from a Bayesian standpoint. The Bayes factor is used to compare Poisson and negative binomial models for two previously published calibration data sets describing the induction of dicentric chromosome aberrations by high doses of neutrons. Posterior densities for the model parameters, which characterise dose response and overdispersion, are calculated and graphed. Calibrative densities are derived for unknown neutron doses from hypothetical radiation accident data to determine the impact of different model assumptions on dose estimates. The main conclusion is that an initial assumption of a negative binomial model is the conservative approach to chromosome dosimetry for high LET radiations.
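As a toy illustration of the Bayes factor comparison: for two point hypotheses the Bayes factor reduces to a likelihood ratio. The counts and parameter values below are invented, not the published calibration data, and the paper's full analysis integrates over parameter priors rather than fixing them.

```python
import math

def log_poisson_pmf(y, lam):
    return y * math.log(lam) - lam - math.lgamma(y + 1)

def log_negbin_pmf(y, mu, k):
    """NB parameterised by mean mu and shape k (variance mu + mu^2/k)."""
    return (math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
            + k * math.log(k / (k + mu)) + y * math.log(mu / (k + mu)))

def log_bayes_factor(data, lam, mu, k):
    """log Bayes factor of NB over Poisson for two point hypotheses
    (for point hypotheses the Bayes factor equals the likelihood ratio)."""
    lp = sum(log_poisson_pmf(y, lam) for y in data)
    ln = sum(log_negbin_pmf(y, mu, k) for y in data)
    return ln - lp

# Invented overdispersed dicentric counts (mean 3.2, variance far above 3.2)
data = [0, 0, 1, 5, 9, 0, 2, 14, 1, 0]
print(log_bayes_factor(data, lam=3.2, mu=3.2, k=0.8) > 0)  # NB favoured
```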

7.
Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help focus on high-risk situations and develop safety countermeasures. To understand the relationships between crash frequencies and associated variables, while taking full advantage of the available data, multivariate random-parameters models are appropriate since they can simultaneously consider the correlation among specific crash types and account for unobserved heterogeneity. A key issue that arises with correlated multivariate data, however, is that the number of crash-free observations increases as crash counts are disaggregated into many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of the multivariate zero-inflated negative binomial (MZINB) and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the MRZINB model, the estimated parameters vary significantly across intersections for different crash types.
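The zero-inflated negative binomial building block of these models can be sketched as a two-component mixture pmf; the parameter values below are hypothetical, and the full MRZINB model additionally makes the parameters random across sites.

```python
import math

def zinb_pmf(y, pi0, mu, k):
    """Zero-inflated negative binomial pmf: with probability pi0 the count is
    a structural zero; otherwise it comes from an NB with mean mu and shape k."""
    log_nb = (math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
              + k * math.log(k / (k + mu)) + y * math.log(mu / (k + mu)))
    return pi0 * (1.0 if y == 0 else 0.0) + (1.0 - pi0) * math.exp(log_nb)

# Excess zeros: the zero probability exceeds that of the plain NB component
p0 = zinb_pmf(0, pi0=0.3, mu=2.0, k=1.0)
print(round(p0, 4))  # 0.5333, versus 0.3333 for the NB component alone
```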

8.
9.
The aim of many occupational safety interventions is to reduce the incidence of injury. However, when measuring intervention effectiveness within a period, population-based accident count data typically contain a large proportion of zero observations (no injury). This situation is compounded where injuries are categorized in a binary manner according to an outcome of interest. The distribution thus comprises a point mass at zero mixed with a non-degenerate parametric component, such as the bivariate Poisson. In this paper, a bivariate zero-inflated Poisson (BZIP) regression model is proposed to evaluate a participatory ergonomics team intervention conducted within the cleaning services department of a public teaching hospital. The findings highlight that the BZIP distribution provided a satisfactory fit to the data, and that the intervention was associated with a significant reduction in overall injury incidence and in the mean number of musculoskeletal (MLTI) injuries, while the decline in non-musculoskeletal (NMLTI) injuries was marginal. In general, the method can be applied to assess the effectiveness of intervention trials on other populations at high risk of occupational injury.
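A minimal simulation of the BZIP idea, a point mass at (0, 0) mixed with a bivariate Poisson built from a shared component, using invented parameters rather than the hospital data:

```python
import math
import random

def poisson_draw(rng, lam):
    """Poisson variate via Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def bzip_draw(rng, p_zero, lam1, lam2, lam0):
    """Bivariate zero-inflated Poisson draw: a point mass at (0, 0) with
    probability p_zero, otherwise a bivariate Poisson pair sharing a common
    component W, which induces positive correlation between the counts."""
    if rng.random() < p_zero:
        return (0, 0)
    w = poisson_draw(rng, lam0)
    return (poisson_draw(rng, lam1) + w, poisson_draw(rng, lam2) + w)

rng = random.Random(1)
sample = [bzip_draw(rng, 0.4, 1.0, 0.5, 0.3) for _ in range(2000)]
zero_share = sum(1 for x, y in sample if x == 0 and y == 0) / len(sample)
```

The share of (0, 0) pairs (about 0.5 here) exceeds what the bivariate Poisson component alone would produce, which is exactly the excess-zero feature the BZIP regression models.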

10.
One of the major tasks of police stations is the management of local road traffic accidents. Proper prevention policies that reflect local accident characteristics could greatly help individual police stations decrease road traffic accidents at all severity levels. To relate accident variation to local driving-environment characteristics, we use both cluster analysis and Poisson regression. The fitted result at the level of each cluster, for each type of accident severity, is used as an input to quality function deployment (QFD). QFD has been applied to customer satisfaction in various industrial quality improvement settings, where several types of customer requirements are related to various control factors. We show how QFD enables one to set priorities among the road accident control policies to which each police station has to pay particular attention.

11.
The duration of a freeway traffic accident is an important factor affecting traffic congestion, environmental pollution, and secondary accidents. Among previous studies, the M5P algorithm has been shown to be an effective tool for predicting incident duration. M5P builds a tree-based model, like the traditional classification and regression tree (CART) method, but with multiple linear regression models as its leaves. The problem with using M5P for accident duration prediction, however, is that linear regression assumes the conditional distribution of accident durations is normal, whereas the distribution of a "time-to-event" variable is almost certainly asymmetric. A hazard-based duration model (HBDM) is a better choice for this kind of "time-to-event" modeling, and HBDMs have previously been applied to analyze and predict traffic accident durations. Previous research, however, has not applied HBDMs in combination with clustering or classification of the dataset to minimize data heterogeneity. The current paper proposes a novel approach to accident duration prediction that improves on the original M5P tree algorithm through the construction of an M5P-HBDM model, in which the leaves of the M5P tree are HBDMs instead of linear regression models. Such a model minimizes data heterogeneity through dataset classification and avoids the incorrect normality assumption for traffic accident durations. The proposed model was tested on two freeway accident datasets. For each dataset, the first 500 records were used to train three models: (1) an M5P tree; (2) an HBDM; and (3) the proposed M5P-HBDM; the remainder of the data were used for testing. The results show that the proposed M5P-HBDM identified more significant and meaningful variables than either M5P or the HBDM, and had the lowest overall mean absolute percentage error (MAPE).
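The MAPE criterion used to compare the models is straightforward to compute; the durations below are hypothetical, not from the freeway datasets.

```python
def mape(actual, predicted):
    """Mean absolute percentage error over paired duration observations."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

durations = [30.0, 45.0, 60.0]   # hypothetical incident durations (minutes)
predicted = [33.0, 45.0, 48.0]   # hypothetical model predictions
print(round(mape(durations, predicted), 2))  # 10.0
```

Note that MAPE is undefined when an actual duration is zero, which is not an issue for incident durations but matters for crash counts.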

12.
Crash data can often be characterized by over-dispersion, a heavy (long) tail and many observations with the value zero. Over the last few years, a small number of researchers have started developing and applying novel multi-parameter models to analyze such data. These multi-parameter models have been proposed to overcome the limitations of the traditional negative binomial (NB) model, which cannot handle this kind of data efficiently. The research documented in this paper continues the work on multi-parameter models. The objective of this paper is to document the development and application of a flexible NB generalized linear model with randomly distributed mixed effects characterized by the Dirichlet process (NB-DP) to model crash data. The objective was accomplished using two datasets. The new model was compared to the NB model and the recently introduced model based on the mixture of the NB and Lindley distributions (NB-L). Overall, the study shows that the NB-DP model offers better performance than the NB model when data are over-dispersed and heavy-tailed. The NB-DP performed better than the NB-L when the dataset had a heavy tail but a smaller percentage of zeros; both models performed similarly when the dataset contained a large proportion of zeros. In addition to greater flexibility, the NB-DP provides a clustering by-product that allows the safety analyst to better understand the characteristics of the data, such as the identification of outliers and sources of dispersion.

13.
Safety is a key concern in the design, operation and development of light rail systems, including trams or streetcars, as they impose crash risks on road users in terms of crash frequency and severity. The aim of this study is to identify key traffic, transit and route factors that influence tram-involved crash frequencies along tram route sections in Melbourne. A random effects negative binomial (RENB) regression model was developed to analyze crash frequency data obtained from Yarra Trams, the tram operator in Melbourne. The RENB modelling approach can account for spatial and temporal variations within observation groups in panel count data structures by assuming that group-specific effects are randomly distributed across locations. The results identify many significant factors affecting tram-involved crash frequency, including tram service frequency (2.71), tram stop spacing (−0.42), tram route section length (0.31), tram signal priority (−0.25), general traffic volume (0.18), tram lane priority (−0.15) and the ratio of platform tram stops (−0.09). The findings provide useful insights into route-section-level tram-involved crashes in an urban tram or streetcar operating environment. The method described represents a useful planning tool for transit agencies hoping to improve safety performance.
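A sketch of how a negative binomial regression coefficient translates into an expected change in crash frequency. Treating the reported estimates as coefficients on a log-link scale is an assumption here; the example reuses the signal-priority figure purely for illustration.

```python
import math

def pct_change_in_mean(beta, delta_x):
    """Under a log link, E[crashes] = exp(x'beta), so a change delta_x in one
    covariate multiplies the expected crash frequency by exp(beta * delta_x)."""
    return (math.exp(beta * delta_x) - 1.0) * 100.0

# If -0.25 is a log-link coefficient for tram signal priority, moving from
# no priority (0) to priority (1) implies roughly a 22% drop in expected crashes
print(round(pct_change_in_mean(-0.25, 1.0), 1))  # -22.1
```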

14.
A chemical kinetics model was proposed to describe the abrasive size effect in chemical mechanical polishing (CMP). The model treats the pad as a sort of catalyst and accounts for the re-adhesion of abrasives due to their large size. A general equation was deduced according to the chemical kinetics methodology to capture the size effect. Finally, fitting a set of data on the abrasive size effect in CMP suggests a possible form PR = α·C_C·C_T·X_A·C_W·A^n / [β + γ·X_A·C_W·A^n], where α, β, γ and n are the parameters of the CMP system.
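Assuming the saturating form above (the grouping of the run-together symbols into C_C, C_T, X_A, C_W and A^n is a reconstruction), a small sketch shows the qualitative size effect: the removal rate rises with abrasive size A and then saturates. All values are hypothetical.

```python
def polish_rate(alpha, beta, gamma, n, Cc, Ct, Xa, Cw, A):
    """Saturating CMP removal-rate form:
    PR = alpha*Cc*Ct*Xa*Cw*A^n / (beta + gamma*Xa*Cw*A^n).
    As A grows, PR approaches the ceiling alpha*Cc*Ct/gamma."""
    core = Xa * Cw * A ** n
    return alpha * Cc * Ct * core / (beta + gamma * core)

# Hypothetical parameters: ceiling is alpha*Cc*Ct/gamma = 2.0
small = polish_rate(1.0, 1.0, 1.0, 1.0, 1.0, 2.0, 1.0, 1.0, 0.1)
large = polish_rate(1.0, 1.0, 1.0, 1.0, 1.0, 2.0, 1.0, 1.0, 100.0)
print(round(small, 3), round(large, 3))  # 0.182 1.98
```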

15.
The literature is replete with models that examine various aspects of cellular manufacturing (CM), such as the optimisation of cell layouts. However, many firms may realise zero to marginal returns from CM. Given this uncertainty, a manager should first determine the value of CM to the firm before deploying it. Although traditional valuation models employing discounted cash flow analysis allow for uncertainty, they treat future investments as fixed when computing the investment's present value. The real options (RO) logic of valuation allows the manager to exercise the option to invest in or abandon a project based on expected outcomes; future investments are thus options. This paper presents an RO model for CM migration that addresses whether a firm should migrate to CM, and it prescribes the sequence of cell deployment, which has not been addressed in the literature. Our model is also more transparent and accessible to practitioners, with an accompanying software tool for prospective users. Finally, we use simulation extensively to discover the drivers of the optimal cell deployment sequence. Our results show a complex interplay between net present value, speed of cellularisation, inter-cell learning and volatility in terms of their influence on the cell sequence.

16.
During the last few decades, the two-fluid model and its two parameters have been widely used in transportation engineering to represent the quality of operational traffic service on urban arterials. Catastrophe models have also often been used to describe traffic flow on freeway sections. This paper demonstrates the possibility of developing a pro-active network screening tool that estimates the crash rate using a stochastic cusp catastrophe model with the two-fluid model's parameters as inputs. The paper investigates the analogy between the logic behind the two-fluid model and the catastrophe model using straightforward graphical illustrations, and then demonstrates the application of two-fluid model parameters in a stochastic catastrophe model designed to estimate the level of safety on urban arterials. Current road safety management, including network safety screening, is reactive rather than pro-active, in the sense that an existing hotspot must be identified before a safety improvement program can be implemented. This paper suggests that a stochastic catastrophe model can help us become more pro-active by identifying urban arterials that currently show an acceptable level of safety but are vulnerable to turning into crash hotspots. We would then be able to implement remedial actions before hotspots develop.
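The bistability that makes the cusp model attractive for hotspot screening can be sketched from its deterministic equilibrium equation x^3 - b·x - a = 0. The control parameters a and b here are generic stand-ins, not the two-fluid parameters of the paper.

```python
def cusp_equilibria(a, b, grid=2000, span=5.0):
    """Roots of the cusp equilibrium equation x^3 - b*x - a = 0, found by
    scanning for sign changes of f(x) on a uniform grid over [-span, span]."""
    f = lambda x: x ** 3 - b * x - a
    xs = [-span + 2 * span * i / grid for i in range(grid + 1)]
    roots = []
    for x0, x1 in zip(xs, xs[1:]):
        if f(x0) == 0 or f(x0) * f(x1) < 0:
            roots.append(0.5 * (x0 + x1))
    return roots

# Inside the bifurcation set the system is bistable (three equilibria,
# two stable); outside it there is a single equilibrium
print(len(cusp_equilibria(a=0.0, b=3.0)), len(cusp_equilibria(a=0.0, b=-1.0)))
```

An arterial sitting near the fold of this surface can jump discontinuously between regimes, which is the intuition behind screening for "vulnerable" sites before they become hotspots.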

17.
Replacing traditional energy sources with renewable energy sources is an effective way to achieve emission reduction targets. Focusing on OECD countries from 1990 to 2018, this study examines the determinants of renewable energy innovation by applying a negative binomial model. There are four main findings: (1) Renewable energy patents show an inverted U-shaped curve, peaking in 2010; solar energy accounts for the largest share of patents; and the US is the largest renewable energy innovator, followed by South Korea and Germany. (2) Renewable electricity installed capacity, the share of research and development (R&D) expenditure in GDP, and implementation of the Kyoto Protocol are all found to promote innovation; by comparison, the proportion of renewable energy power generation in total electricity generating capacity shows a negative effect. The price of crude oil shows no significant effect, owing to offsetting effects between the European and non-European country groups. (3) The share of R&D expenditure in GDP is confirmed to be the force driving technological progress in the solar, geothermal, and marine sectors, and it plays a more important role in Japan than in the US or Europe. Implementation of the Kyoto Protocol has no significant effect on innovation in European countries. (4) Three institutional factors (the legal system and property rights; regulations; and freedom to trade internationally) are confirmed to be driving forces, whereas the growth and free circulation of money is not. Policy implications are proposed for optimizing the structure of the renewable energy sector, enhancing renewable energy capacity, and improving R&D investment and the institutional environment. Future research should examine a broader sample, using micro-level and socio-technical analysis.

18.
A Bayesian hierarchical model for accident and injury surveillance
This article presents a recent study which applies Bayesian hierarchical methodology to model and analyse accident and injury surveillance data. A hierarchical Poisson random effects spatio-temporal model is introduced, and an analysis of inter-regional variations and regional trends in hospitalisations due to motor vehicle accident injuries to boys aged 0-24 in the province of British Columbia, Canada, is presented. The objective of this article is to illustrate how the modelling technique can be implemented as part of an accident and injury surveillance and prevention system, in which transportation and/or health authorities may routinely examine accidents, injuries, and hospitalisations to target high-risk regions for prevention programs, to evaluate prevention strategies, and to assist in health planning and resource allocation. The innovation of the methodology is its ability to uncover and highlight important underlying structure in the data. Between 1987 and 1996, the British Columbia hospital separation registry recorded 10,599 motor vehicle traffic injury related hospitalisations among boys aged 0-24 who resided in British Columbia, of which the majority (89%) occurred to boys aged 15-24. The injuries were aggregated by three age groups (0-4, 5-14, and 15-24), 20 health regions (based on place of residence), and 10 calendar years (1987 to 1996), and the corresponding mid-year population estimates were used as the 'at risk' population. An empirical Bayes inference technique using penalised quasi-likelihood estimation was implemented to model both rates and counts, with spline smoothing accommodating non-linear temporal effects. The results show that (a) crude rates and ratios at the health region level are unstable, (b) the models with spline smoothing enable us to explore possible shapes of injury trends at both the provincial and regional levels, and (c) the fitted models provide a wealth of information about the spatial and temporal patterns of the injury counts, rates and ratios. During the 10-year period, high injury risk ratios evolved from the northwest to the central interior and the southeast.
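The crude region-level rates and ratios that the hierarchical model is designed to stabilise can be computed directly; the region figures below are invented, not from the British Columbia registry.

```python
def injury_rate_ratio(observed, population, reference_rate):
    """Crude region-level injury rate per 10,000 person-years and its ratio
    to a reference (e.g. provincial) rate. Small populations make these
    crude estimates unstable, which motivates Bayesian smoothing."""
    rate = 10000.0 * observed / population
    return rate, rate / reference_rate

# Hypothetical region: 53 hospitalisations among 21,000 boys,
# against a provincial reference rate of 20 per 10,000
rate, ratio = injury_rate_ratio(53, 21000, 20.0)
print(round(rate, 1), round(ratio, 2))  # 25.2 1.26
```

A ratio above 1 flags a candidate high-risk region, but for small populations the hierarchical model shrinks such ratios toward the provincial mean before targeting prevention programs.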

19.
A model of traffic crashes in New Zealand
The aim of this study was to examine changes in the trend and seasonal patterns of fatal crashes in New Zealand in relation to changes in economic conditions between 1970 and 1994. The Harvey and Durbin (Journal of the Royal Statistical Society 149 (3) (1986) 187-227) structural time series model (STSM), an 'unobserved components' class of model, was used to estimate models for quarterly fatal traffic crashes. The dependent variable was modelled as the number of crashes and three variants of the crash rate (crashes per 10,000 km travelled, crashes per 1,000 vehicles, and crashes per 1,000 population). Independent variables included the unemployment rate (UER), real gross domestic product (GDP) per capita, the proportion of motorcycles, the proportion of young males in the population, alcohol consumption per capita, the open road speed limit, and dummy variables for the 1973 and 1979 oil crises and seat belt wearing laws. The UER, real GDP per capita, and alcohol consumption were all significant and important factors in explaining the short-run dynamics of the models. In the long run, real GDP per capita was directly related to the number of crashes, but after controlling for distance travelled it was not significant. This suggests that increases in income are associated with a short-run reduction in risk but with increased exposure to crashes (i.e. distance travelled) in the long run. A 1% increase in the open road speed limit was associated with a long-run 0.5% increase in fatal crashes. Substantial reductions in fatal crashes were associated with the 1979 oil crisis and seat belt wearing laws; the 1984 universal seat belt wearing law was associated with a sustained 15.6% reduction in fatal crashes. These road policy factors appeared to have a greater influence on crashes than demographic and economic factors.

20.
Efficient handling of containers at a terminal can reduce overall vessel sojourn times and minimise operational costs. The internal transport of containers in these terminals is performed by vehicles that share a common guide path. The throughput capacity of a terminal may increase with the number of vehicles; however, congestion may simultaneously reduce the effective vehicle speed. We model this situation using a traffic flow-based closed queuing network model. The internal transport is modelled using a load-dependent server that captures the interaction between the number of vehicles in a transport segment and the effective vehicle speed. Using a non-linear traffic flow model, we show that the throughput reduction due to vehicle congestion can be as large as 85%. Hence, the effect of vehicle congestion during internal transport cannot be ignored. The model can also be used to determine the number of vehicles required to achieve a target terminal throughput while explicitly considering the effect of vehicle congestion.
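A sketch of the load-dependent-server idea, assuming a simple linear (Greenshields-style) speed-density relation rather than the paper's non-linear traffic flow model; all parameters are hypothetical.

```python
def effective_speed(n_vehicles, v_free, n_jam):
    """Load-dependent speed: more vehicles on the shared guide path lower
    the effective speed linearly, reaching zero at the jam level n_jam."""
    return max(0.0, v_free * (1 - n_vehicles / n_jam))

def throughput(n_vehicles, v_free, n_jam, loop_length):
    """Container moves per hour: fleet size times trips per vehicle per hour,
    where trip rate is effective speed divided by the loop length."""
    return n_vehicles * effective_speed(n_vehicles, v_free, n_jam) / loop_length

# Adding vehicles beyond a point reduces throughput because of congestion
t10 = throughput(10, v_free=20.0, n_jam=40, loop_length=2.0)
t35 = throughput(35, v_free=20.0, n_jam=40, loop_length=2.0)
print(t10, t35)  # 75.0 43.75
```

With these toy parameters, throughput peaks at 20 vehicles and then declines, illustrating why the queuing network model is needed to size the fleet rather than simply adding vehicles.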
