Similar Documents
1.
A methodology is presented for estimating overall travel time from individual travel time measurements within a time window. To better handle data with complex outlier-generation mechanisms, fuzzy clustering techniques are used to represent relationships between the individual travel time data collected within a measuring time window. The data set is treated as a fuzzy set to which each data point belongs to some degree of membership, so that the transition from the main body of data to extreme data points is handled smoothly. Two algorithms have been developed, based on 'point' and 'line' fuzzy cluster prototypes, with iterative procedures to calculate the fuzzy cluster centre and the fuzzy line. A novel estimation method based on the time projection of a fuzzy line is proposed; it is robust, because it uses a wide time window, and timely, because the time projection resolves the most recent travel time estimate. Unlike deterministic approaches, where hard thresholds must be specified to exclude outliers, the proposed methods estimate travel times using all available data and can therefore be applied in a wide variety of scenarios without fine-tuning of thresholds.
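A minimal sketch of the 'point'-prototype idea (not the authors' exact iteration; the scale parameter and data below are invented): each observation's membership decreases smoothly with its distance from the current centre, so outliers are down-weighted rather than cut off by a hard threshold.

```python
# Robust travel-time estimate via a single fuzzy "point" prototype.
def fuzzy_point_estimate(times, scale=60.0, iters=50):
    centre = sum(times) / len(times)          # start from the plain mean
    for _ in range(iters):
        # Cauchy-type membership: 1 at the centre, -> 0 for far outliers
        w = [1.0 / (1.0 + ((t - centre) / scale) ** 2) for t in times]
        centre = sum(wi * ti for wi, ti in zip(w, times)) / sum(w)
    return centre

# Ten plausible link travel times (seconds) plus two gross outliers
data = [118, 120, 119, 121, 122, 117, 120, 123, 118, 121, 600, 650]
print(round(fuzzy_point_estimate(data), 1))   # close to 120, not the contaminated mean (~204)
```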

2.
The objective of this paper is the analysis of the state of the art in risk indicators and exposure data for safety performance assessment in Europe, in terms of data availability, collection methodologies and use. More specifically, the concepts of exposure and risk are explored, as well as the theoretical properties of various exposure measures used in road safety research (e.g. vehicle- and person-kilometres of travel, vehicle fleet, road length, driver population, time spent in traffic, etc.). Moreover, the existing methods for collecting disaggregate exposure data for risk estimates at national level are presented and assessed, including survey methods (e.g. travel surveys, traffic counts) and databases (e.g. national registers). A detailed analysis of the availability and quality of existing risk exposure data is also carried out. More specifically, the results of a questionnaire survey in the European countries are presented, with detailed information on the exposure measures available, their possible disaggregations (i.e. variables and values), their conformity to standard definitions and the characteristics of their national collection methods. Finally, the potential for international risk comparisons is investigated, mainly through the International Data Files with exposure data (e.g. Eurostat, IRTAD, ECMT, UNECE, IRF, etc.). The results of this review confirm that comparing risk rates at international level can be a complex task, as the availability and quality of exposure estimates in European countries vary significantly. The lack of a common framework for the collection and exploitation of exposure data significantly limits the comparability of the national data. On the other hand, the International Data Files containing exposure data provide useful statistics and estimates in a systematic way and are currently the only sources allowing international comparisons of road safety performance under certain conditions.

3.
Uncertain population behaviors in a regional emergency could potentially harm the performance of the region's transportation system and subsequent evacuation effort. The integration of behavioral survey data with travel demand modeling enables an assessment of transportation system performance and the identification of operational and public health countermeasures. This paper analyzes transportation system demand and system performance for emergency management in three disaster scenarios. A two-step methodology first estimates the number of trips evacuating the region, thereby capturing behavioral aspects in a scientifically defensible manner based on survey results, and second, assigns these trips to a regional highway network, using geographic information systems software, thereby making the methodology transferable to other locations. Performance measures are generated for each scenario including maps of volume-to-capacity ratios, geographic contours of evacuation time from the center of the region, and link-specific metrics such as weighted average speed and traffic volume.
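Two of the link-level performance measures named above can be sketched directly; the link volumes, capacities and speeds below are hypothetical, not the study's network data.

```python
# Per-link volume-to-capacity (v/c) ratios and the volume-weighted
# average speed over a handful of invented evacuation-network links.
links = [
    # (volume [veh/h], capacity [veh/h], speed [km/h])
    (1800, 2000, 65.0),
    (2600, 2400, 30.0),   # over capacity: v/c > 1
    (1200, 2000, 80.0),
]

vc_ratios = [v / c for v, c, _ in links]
total_volume = sum(v for v, _, _ in links)
weighted_speed = sum(v * s for v, _, s in links) / total_volume

print([round(r, 2) for r in vc_ratios])   # [0.9, 1.08, 0.6]
print(round(weighted_speed, 1))           # 52.0
```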

4.
To extract as much information as possible from measured data and reduce the uncertainty of the model parameters to be identified, a sensor placement optimization method oriented towards structural model parameter identification is proposed. To avoid the influence that conventional finite element modelling with static shape functions has on the structural dynamic characteristics and on the optimal sensor placement, the structure is modelled with the high-accuracy spectral finite element method. The optimality criterion is minimum uncertainty in the identified structural model parameters, with the degree of uncertainty quantified by a scalar information-entropy index and the identification performed by Bayesian statistical system identification. An integer-coded genetic algorithm minimizes the information-entropy index over all possible sensor configurations to obtain the optimal positions for a given number of sensors. The method is verified by numerical simulation and model tests of a periodic pipe-beam model with elastic joints on an elastic foundation.
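The selection criterion can be sketched in miniature (this is not the paper's spectral finite element model, Bayesian identification or integer-coded genetic algorithm): for a linearised model, the information-entropy index falls as the determinant of the Fisher information matrix grows, so the best sensor set maximises that determinant; with few candidate positions, exhaustive search stands in for the GA. The sensitivity vectors `G` are invented.

```python
from itertools import combinations

# Hypothetical sensitivity vectors of 6 candidate sensor positions with
# respect to two model parameters (e.g. two joint stiffnesses).
G = [(1.0, 0.1), (0.9, 0.2), (0.2, 1.0), (0.1, 0.9), (0.5, 0.5), (0.05, 0.05)]

def det_info(sensors):
    # Determinant of the 2x2 information matrix Q = sum g_i g_i^T
    a = sum(G[i][0] ** 2 for i in sensors)
    b = sum(G[i][0] * G[i][1] for i in sensors)
    d = sum(G[i][1] ** 2 for i in sensors)
    return a * d - b * b

best = max(combinations(range(len(G)), 2), key=det_info)
print(best)   # picks one sensor sensitive to each parameter: (0, 2)
```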

5.
J.B. Thompson, Thin Solid Films, 1987, 150(2-3): 163-174
A straightforward computer-based general methodology is presented which will enable parameter values and associated error estimates to be extracted from experimental thin film data points. The methodology operates on exact thin film relationships and overcomes problems in interpreting results, such as having to resort to the use of approximate thin film relationships.

The methodology is presented within the framework of the well-known Fuchs-Sondheimer model for conduction in thin continuous metal films. However, its general nature means that it is equally applicable to other theoretical thin film models. An illustration of the methodology's use is given by applying it to a set of thin film resistivity, temperature coefficient of resistivity and thermoelectric power data obtained from measurements on thin continuous copper films.
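The paper works with the exact Fuchs-Sondheimer relationships, but its idea of extracting parameter values from measured film data can be illustrated with the standard thick-film approximation rho(t) ≈ rho0·(1 + (3/8)·l·(1−p)/t), which is linear in 1/t, so ordinary least squares recovers rho0 (intercept) and the product l·(1−p) (from the slope). The data below are synthetic, not Thompson's copper measurements.

```python
def fit_fuchs_sondheimer(thicknesses_nm, rhos):
    # Linear least squares of resistivity against 1/thickness.
    xs = [1.0 / t for t in thicknesses_nm]
    n = len(xs)
    mx, my = sum(xs) / n, sum(rhos) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, rhos)) / \
            sum((x - mx) ** 2 for x in xs)
    rho0 = my - slope * mx                     # bulk resistivity
    l_times_1mp = slope * 8.0 / (3.0 * rho0)   # l*(1-p), in nm
    return rho0, l_times_1mp

# Synthetic data generated with rho0 = 17, l*(1-p) = 30 nm
t_nm = [20.0, 40.0, 80.0, 160.0]
rho = [17.0 * (1 + 3 * 30 / (8 * t)) for t in t_nm]
rho0, lp = fit_fuchs_sondheimer(t_nm, rho)
print(round(rho0, 2), round(lp, 1))   # recovers 17.0 and 30.0
```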


6.
The integration and innovation of higher education in the Guangdong-Hong Kong-Macao Greater Bay Area is an important part of the Bay Area's development. Drawing on deep-rooted Lingnan traditional culture as a cultural advantage for design-industry innovation, the Bay Area is well placed for the coordinated development of the creative design industry and a "Creative Bay Area". In training packaging-design talent, design education can make full use of the Bay Area's geographic, cultural, industrial and regional advantages and apply regional culture in packaging design, while the region's cultural industries and technological strength provide strong support for Bay Area packaging design. This paper analyses, from several perspectives, how packaging-design education in higher education can draw on the Bay Area's advantages to develop within the region's design context.

7.
The number of AGVs required to perform a given level of material handling in an FMS environment is determined using analytical and simulation modelling. The analytical method considers load handling time, empty travel time, and waiting and blocking time. Load handling time is computed from given system parameters. Determining empty vehicle travel is difficult because of the inherent randomness of an FMS. Several research studies for this purpose are discussed and a new model is proposed: a mixed integer programme with the objective of minimizing empty trips, whose constraints are upper and lower bounds on the total number of empty trips starting from or ending at each load transfer station. The phenomena of vehicle waiting and blocking are also discussed. The cumulative impact of these three time estimates is then translated into an initial estimate of AGV fleet size as predicted by the individual models. The method is applied to an illustrative example. Finally, simulation is used to validate the initial estimates of fleet size. The results indicate that the different models either under-estimate or over-estimate the actual number of vehicles required in the system. The proposed model, though it under-estimates the minimum AGV requirement, provides results close to the simulation results; it can therefore be used as an analytical tool prior to the simulation phase of AGVS design.
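A back-of-the-envelope version of the final fleet-sizing step (all figures below are invented, not the paper's example) divides the hourly workload implied by the three time components by the time one vehicle can supply.

```python
import math

# Per-job time components (minutes) -- illustrative values only
loaded_time_min = 4.0    # load handling plus loaded travel
empty_time_min = 1.5     # empty travel (e.g. bounded by the MIP)
wait_block_min = 0.5     # waiting and blocking allowance
jobs_per_hour = 45

workload_min = jobs_per_hour * (loaded_time_min + empty_time_min + wait_block_min)
availability = 0.9       # fraction of each hour a vehicle is usable

fleet = math.ceil(workload_min / (60 * availability))
print(fleet)             # initial AGV fleet-size estimate: 5
```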

8.
This paper introduces a model for the efficient utilization of warehouse personnel in an inventory selection process. A simple mathematical model, based on a time study, that can be applied to establish standards for efficiency measurement of labour in a warehouse setting is presented. The problem of determining a standard for computing performance in an environment with a variable workload is solved by using an adjustable standard for each particular assignment. Specifically, the proposed model is designed to estimate the time required to complete a picking cycle. Three time components are considered: the lead time, travel time and non-efficient time. The results of an empirical study are used to set the values of the system parameters.

The use of a computer system that can generate order lists is essential for an effective application of this methodology. Useful information can be obtained that can aid managers in formulating and implementing incentive plans and for controlling the labour cost which represents a major proportion of the total cost in warehouse operation.
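As a rough sketch of such an adjustable standard (the functional form and parameter values below are illustrative assumptions, not the paper's empirically fitted model), the estimated picking-cycle time can combine the three components scaled per assignment:

```python
# Picking-cycle standard = lead time + picking + travel, inflated by a
# non-efficient-time allowance. All parameters are hypothetical.
def picking_cycle_minutes(n_lines, travel_m, pick_min_per_line=0.4,
                          lead_min=2.0, speed_m_per_min=50.0,
                          noneff_frac=0.15):
    base = lead_min + n_lines * pick_min_per_line + travel_m / speed_m_per_min
    return base * (1 + noneff_frac)   # add non-efficient time

# Standard for a 12-line order list with 300 m of travel
print(round(picking_cycle_minutes(12, 300), 2))   # 14.72
```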

9.
This paper presents a disaggregate approach to crash rate analysis. By enumerating crash rates on a per trip-kilometre basis, the proposed method removes the linearity assumption inherent in the conventional quotient indicator of accidents per unit travel distance. The approach combines two disparate datasets on a geographic information systems (GIS) platform by matching accident records to a defined travel corridor. As an illustration of the methodology, travel information from the Victorian Activity and Travel Survey (VATS) and accident records contained in CrashStat were used to estimate the crash rates of Melbourne residents in different age-sex groups by time of day and day of week. The results show a cubic polynomial when crash rates are plotted against age group, which contrasts distinctly with the U-shaped curve generated by the conventional aggregate quotient approach. Owing to the many assumptions adopted in the computation, this study does not claim that the results obtained are conclusive. The methodology, however, provides a framework on which future crash risk measures could be based as the use of spatial tracking devices becomes prevalent in travel surveys.

10.
The quantification of a fault tree is difficult without an exact probability value for every basic event of the tree. To overcome this difficulty, in this paper we propose a methodology which employs ‘hybrid data’ as a tool to analyse the fault tree. The proposed methodology estimates the failure probability of basic events from a statistical analysis of field-recorded failures. Where past failure records are absent, the method instead follows a fuzzy-set-based evaluation built on the subjective judgement of experts about the failure interval. The proposed methodology has been applied to a conveyor system. The results of the analysis reveal the effectiveness of the proposed methodology and the instrumental role played by the experience of experts in providing reliability-oriented information. Copyright © 2006 John Wiley & Sons, Ltd.
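A sketch of the 'hybrid data' idea (the gate layout and numbers are made up, not the paper's conveyor system): basic events with field records get crisp probabilities, while expert-judged events get triangular fuzzy numbers (low, modal, high). Applying the standard gate formulas component-wise to the three points is a common approximation for triangular fuzzy arithmetic.

```python
def tri(p):                         # promote a crisp value to (l, m, u)
    return p if isinstance(p, tuple) else (p, p, p)

def or_gate(a, b):                  # P(A or B) for independent events
    a, b = tri(a), tri(b)
    return tuple(x + y - x * y for x, y in zip(a, b))

def and_gate(a, b):                 # P(A and B) for independent events
    a, b = tri(a), tri(b)
    return tuple(x * y for x, y in zip(a, b))

motor = 0.02                        # from field failure records
belt = (0.01, 0.03, 0.06)           # expert judgement only
sensor = 0.005

top = or_gate(and_gate(motor, belt), sensor)
print(tuple(round(p, 6) for p in top))   # fuzzy top-event probability
```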

11.
A heuristic method is proposed for estimating travel times in unit load random storage systems where incoming loads are dispatched to the closest available storage positions. A queuing model representation is used where servers correspond to storage positions and the service rate is based on the turnover distribution of stored loads. The resultant state distribution is applied to approximate storage position occupancy probabilities useful for generating storage and retrieval travel time estimates. Computational results suggest that the heuristic procedure yields smaller errors in random storage travel time estimates than alternative models.

12.
The incidence of fatality from automobile accidents in North Cyprus over the period 2010-2014 is 2.75 times the average for the EU. With the prospect of North Cyprus entering the EU, many investments will need to be undertaken to improve road safety in order to reach EU benchmarks. The objective of this study is to provide local estimates of the value of a statistical life and injury, along with the value of time savings; these are among the parameter values needed for evaluating the change in the expected incidence of automotive accidents and the time savings brought about by such projects. In this study we conducted a stated choice experiment to identify the preferences and tradeoffs of automobile drivers in North Cyprus for improved travel times, travel costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that drivers choose. These estimates were used to assess individuals' willingness to pay (WTP) to avoid fatalities and injuries and to save travel time. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of injury (VI) prevented, and the value per hour of travel time saved. The estimates of the VSL range from €315,293 to €1,117,856 and the estimates of VI from €5,603 to €28,186. After adjusting for differences in incomes, these values are consistent with the median results of similar studies done for EU countries.
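How WTP estimates fall out of such a choice model (the coefficients below are invented for illustration, not the study's estimates): with linear route utility V = b_cost·cost + b_time·time + b_risk·p_fatal, the willingness to pay for a unit change in an attribute is the ratio of its marginal utility to the marginal utility of cost.

```python
# Hypothetical estimated utility coefficients
b_cost = -0.8          # utility per euro of travel cost
b_time = -0.02         # utility per minute of travel time
b_risk = -400000.0     # utility per unit probability of a fatality

vot_per_hour = 60 * b_time / b_cost   # value of travel time, EUR/hour
vsl = b_risk / b_cost                 # value of a statistical life, EUR
print(round(vot_per_hour, 2), round(vsl))   # 1.5 500000
```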

13.
We develop a robust methodology which estimates the consequences of DRG cost weight volatility on hospital performance. The methodology is first developed using the hospital baserate as quantitative measure of hospital performance, then analyzed theoretically in the more general framework of cost-benefit analysis, and finally applied to two groups of hospitals. The first data set consists of a homogeneous group of 21 maximum-care hospitals in Germany and incorporates approximately 936,000 inpatient cases in 2003. The second data set consists of a heterogeneous group of 97 German hospitals and incorporates approximately 896,000 inpatient cases in 2003. The main finding is that the robust cost-benefit methodology developed in this study leads to results that are consistent with the theoretical background, since the hospital baserate spread in the more homogeneous group of hospitals is clearly lower than in the more heterogeneous group of hospitals. Our methodology illustrates the robustness of a hospital’s performance with respect to DRG cost weight changes and, therefore, represents a helpful tool in discussions about hospital budgets, strategic alliances, mergers, etc.

14.
In performing pavement life cycle assessment (LCA), users face widely differing reported values of the energy intensity coefficient (EIC) of pavement materials, and the choice of value alters the LCA results significantly. Instead of selecting a particular EIC with little or no justification, as in current pavement LCA practice, this study proposes a methodology to build a probability density function (PDF) for the EIC based on the available data set and its quality. Each data point is first assessed with a data quality indicator (DQI) through a data quality pedigree matrix and converted to a PDF in modified Beta distribution form. Three weighting methods, based on the DQI, on the coefficient of variation (COV) and on the analytic hierarchy process (AHP), were developed to assign weightings to the different data points. Monte Carlo simulation is then run with the weighted PDF of each data point as input to obtain the final PDF for the EIC. A case study estimating the EIC of bitumen from eight data samples was performed using the proposed methodology. It is found that (1) the estimate from the proposed methodology is more reliable (lower COV) than any single data point, because it uses the information in the overall data sample; (2) the AHP weighting method is preferred, although the results of the three weighting methods are close; and (3) the central estimates of bitumen's EIC lie between 5.4 and 5.8 MJ/kg. The proposed methodology is helpful in calculating EICs for pavement materials and in capturing the uncertainty of LCA results in a statistical sense.
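A sketch of the Monte Carlo step (the data points, their Beta shapes and the weights below are all invented, not the eight bitumen samples of the study): each data point contributes a Beta-shaped PDF on its own uncertainty range, is sampled in proportion to its weight, and the pooled draws form the final PDF of the EIC.

```python
import random
random.seed(0)

# (centre MJ/kg, half-range, weight) -- hypothetical, one tuple per datum
data = [(4.9, 1.0, 0.1), (5.6, 0.5, 0.4), (5.7, 0.6, 0.3), (6.3, 1.2, 0.2)]

draws = []
for _ in range(20000):
    c, h, _w = random.choices(data, weights=[w for _, _, w in data])[0]
    # symmetric Beta(4, 4) rescaled onto [c - h, c + h]
    draws.append(c - h + 2 * h * random.betavariate(4, 4))

mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
print(round(mean, 2), round((var ** 0.5) / mean, 3))   # central estimate, COV
```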

15.

We examine whether the presence of alliance firms in the same regional cluster or in close physical proximity influences contracting behaviour of biopharmaceutical companies by enhancing coordination and mitigating the need for control. The literature addressing geographical proximity and alliance contracting fails to make a clear distinction between physical co-location and co-location within a cluster, although the two attributes are conceptually distinct. We find that geographic proximity is not related to contracting behaviour. The impact of co-location within a cluster is more nuanced. Specifically, we find that co-location in the San Francisco Bay Area cluster is associated with less complex contracting; however, co-location in other biotechnology clusters does not seem to be related to contracting behaviour. We believe that the informal business environment characterising the Bay Area cluster, as well as unique roles played by venture capital and law firms located in the Bay Area account for the distinct result.

16.
It is well-known that small area estimation needs explicit or at least implicit use of models (cf. Rao in Small Area Estimation, Wiley, New York, 2003). These model-based estimates can differ widely from the direct estimates, especially for areas with very low sample sizes. While model-based small area estimates are very useful, one potential difficulty with such estimates is that when aggregated, the overall estimate for a larger geographical area may be quite different from the corresponding direct estimate, the latter being usually believed to be quite reliable. This is because the original survey was designed to achieve specified inferential accuracy at this higher level of aggregation. The problem can be more severe in the event of model failure as often there is no real check for validity of the assumed model. Moreover, an overall agreement with the direct estimates at an aggregate level may sometimes be politically necessary to convince the legislators of the utility of small area estimates.
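The aggregation problem described above is commonly handled by ratio benchmarking: scaling the model-based small-area estimates so that they sum to the reliable direct estimate for the larger area. A minimal sketch with invented figures:

```python
# Model-based estimates for three small areas (hypothetical values)
model_est = {"area_a": 120.0, "area_b": 340.0, "area_c": 90.0}
direct_total = 600.0        # reliable direct estimate for the whole region

scale = direct_total / sum(model_est.values())
benchmarked = {k: v * scale for k, v in model_est.items()}

print(round(sum(benchmarked.values()), 1))   # now matches 600.0
```

Ratio benchmarking preserves the relative sizes of the small-area estimates while forcing exact agreement at the aggregate level.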

17.
Distribution centres (DCs) are the hubs connecting transport streams in the supply chain. Synchronising incoming and outgoing cargo at a DC requires reliable arrival times, and hence a reliable method of predicting them. A literature review was performed to find the factors reported to predict arrival time: congestion, weather, time of day and incidents. While travel time receives considerable attention, there is a gap in the literature concerning arrival time vs. travel/journey time prediction: none of the reviewed papers investigates arrival time; all investigate travel time. Arrival time is the consequence of travel time combined with departure time, so although the travel time literature is applicable, the human factor involved in planning the time of departure can affect the arrival time (especially for truck drivers who have travelled the same route before). To validate the factors that influence arrival time, the authors conducted a detailed case study comprising a survey of 230 truckers, a data analysis and a data mining experiment using real traffic and weather data. These show that although a ‘big data’ approach delivers valuable insights, its predictive power is not as high as expected; other factors, such as human or organisational factors, could influence arrival time, and it is concluded that such organisational factors should be considered in future predictive models.

18.

Through its Department of Defense (DoD) agencies and outside contractors, the USA invests billions of dollars each year in military construction (MILCON) projects. Although construction management expertise is gained and significant amounts of data are collected from past projects, completing projects on time remains a challenge. This article uses data from 466 MILCON projects to identify key factors that influence project duration and provides a new model to predict project time outcomes. The model generates accurate results and serves as a useful tool in the early phases of a project life cycle. Another key contribution of this study is the methodology employed, which covers the use of available data, the targeting of relevant parameters, and the development of the predictive model. This methodology is applicable outside the MILCON domain: with an appropriate data set, and by targeting the relevant influential factors, it can produce models that predict the time outcomes of future projects.
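A toy version of the prediction step (the 466-project data set is not public, so the figures below are invented, and using project cost as the single influential factor is an assumption): ordinary least squares of project duration on cost, then used to predict a new project's duration.

```python
projects = [            # (cost in $M, duration in months) -- invented
    (10, 14), (25, 20), (40, 26), (60, 33), (80, 41),
]

n = len(projects)
mx = sum(c for c, _ in projects) / n
my = sum(d for _, d in projects) / n
slope = sum((c - mx) * (d - my) for c, d in projects) / \
        sum((c - mx) ** 2 for c, _ in projects)
intercept = my - slope * mx

def predict(cost):
    return intercept + slope * cost

print(round(predict(50), 1))   # expected duration of a $50M project
```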

19.
The Model Predictive Control (MPC) method has been widely adopted as a useful tool for keeping quality on target in manufacturing processes. However, conventional MPC methods are inadequate for large-scale manufacturing processes, particularly in the presence of disturbances. The goal of this paper is to propose a Partial Least Squares (PLS)-based MPC methodology that accommodates the characteristics of a large-scale manufacturing process. The detailed objectives are: (i) to identify a reliable prediction model that handles large-scale "short and fat" data; (ii) to design an effective control model that both maximizes the required quality and minimizes the labor costs associated with changing the process parameters; and (iii) to develop an efficient optimization algorithm that reduces the computational burden of the large-scale optimization. The case study and experimental results demonstrate that the presented MPC methodology provides the set of optimal process parameters for quality improvement. In particular, the quality deviations are reduced by 99.4%, the labor costs by 84.2%, and the computational time by 98.8%. As a result, the proposed MPC method will save both cost and time in achieving the desired quality for a large-scale manufacturing process.

20.
We present a methodology for the in-process control of design inspection, focusing on escaped defects. The methodology estimates the defect escape probability at each phase in the process using the information available at the beginning of that phase. The development of the models is illustrated by a case involving data collected from the design inspections of software components. The data include the size of the product component, as well as the time invested in preparing for the inspection and in actually carrying it out. After smoothing the original data with a clustering algorithm, to compensate for its excessive variability, a series of regression models exhibiting increasingly better fits to the data as more information becomes available was obtained. We discuss how management can use such models to reduce escape risk as the inspection process evolves. Copyright © 2003 John Wiley & Sons, Ltd.
